00:00:00.001 Started by upstream project "autotest-per-patch" build number 131277
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.061 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.062 The recommended git tool is: git
00:00:00.062 using credential 00000000-0000-0000-0000-000000000002
00:00:00.065 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.112 Fetching changes from the remote Git repository
00:00:00.115 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.193 Using shallow fetch with depth 1
00:00:00.193 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.193 > git --version # timeout=10
00:00:00.300 > git --version # 'git version 2.39.2'
00:00:00.300 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.354 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.354 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.171 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.185 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.200 Checking out Revision 58e4f482292076ec19d68e6712473e60ef956aed (FETCH_HEAD)
00:00:05.200 > git config core.sparsecheckout # timeout=10
00:00:05.214 > git read-tree -mu HEAD # timeout=10
00:00:05.233 > git checkout -f 58e4f482292076ec19d68e6712473e60ef956aed # timeout=5
00:00:05.255 Commit message: "packer: Fix typo in a package name"
00:00:05.255 > git rev-list --no-walk 58e4f482292076ec19d68e6712473e60ef956aed # timeout=10
00:00:05.361 [Pipeline] Start of Pipeline
00:00:05.375 [Pipeline] library
00:00:05.376 Loading library shm_lib@master
00:00:05.377 Library shm_lib@master is cached. Copying from home.
00:00:05.392 [Pipeline] node
00:00:05.404 Running on WFP29 in /var/jenkins/workspace/nvmf-phy-autotest
00:00:05.406 [Pipeline] {
00:00:05.416 [Pipeline] catchError
00:00:05.417 [Pipeline] {
00:00:05.431 [Pipeline] wrap
00:00:05.441 [Pipeline] {
00:00:05.449 [Pipeline] stage
00:00:05.451 [Pipeline] { (Prologue)
00:00:05.659 [Pipeline] sh
00:00:05.948 + logger -p user.info -t JENKINS-CI
00:00:05.964 [Pipeline] echo
00:00:05.966 Node: WFP29
00:00:05.971 [Pipeline] sh
00:00:06.262 [Pipeline] setCustomBuildProperty
00:00:06.268 [Pipeline] echo
00:00:06.269 Cleanup processes
00:00:06.273 [Pipeline] sh
00:00:06.548 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:06.548 400118 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:06.561 [Pipeline] sh
00:00:06.845 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:06.846 ++ grep -v 'sudo pgrep'
00:00:06.846 ++ awk '{print $1}'
00:00:06.846 + sudo kill -9
00:00:06.846 + true
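The "Cleanup processes" step above is a common shell idiom: list every process still running out of the workspace, filter the pgrep invocation itself out of the result, and kill whatever remains, tolerating an empty match. A minimal sketch of that idiom, using the workspace path from this log (`kill -9` with no PIDs fails, which is why the trace shows a `+ true` right after it):

```bash
#!/usr/bin/env bash
# Kill any stale processes left over from a previous run of this job.
ws=/var/jenkins/workspace/nvmf-phy-autotest/spdk

# pgrep -af prints "PID full-command" for every match; the sudo'd pgrep
# matches its own command line, so it is filtered back out with grep -v.
pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')

# If nothing matched, kill -9 gets no arguments and exits non-zero;
# '|| true' keeps a 'set -e' pipeline step from failing the build.
sudo kill -9 $pids || true
```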
00:00:06.861 [Pipeline] cleanWs
00:00:06.873 [WS-CLEANUP] Deleting project workspace...
00:00:06.873 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.878 [WS-CLEANUP] done
00:00:06.883 [Pipeline] setCustomBuildProperty
00:00:06.900 [Pipeline] sh
00:00:07.178 + sudo git config --global --replace-all safe.directory '*'
00:00:07.277 [Pipeline] httpRequest
00:00:07.665 [Pipeline] echo
00:00:07.667 Sorcerer 10.211.164.101 is alive
00:00:07.677 [Pipeline] retry
00:00:07.679 [Pipeline] {
00:00:07.691 [Pipeline] httpRequest
00:00:07.695 HttpMethod: GET
00:00:07.695 URL: http://10.211.164.101/packages/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz
00:00:07.696 Sending request to url: http://10.211.164.101/packages/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz
00:00:07.697 Response Code: HTTP/1.1 200 OK
00:00:07.697 Success: Status code 200 is in the accepted range: 200,404
00:00:07.698 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz
00:00:08.341 [Pipeline] }
00:00:08.359 [Pipeline] // retry
00:00:08.367 [Pipeline] sh
00:00:08.650 + tar --no-same-owner -xf jbp_58e4f482292076ec19d68e6712473e60ef956aed.tar.gz
00:00:08.667 [Pipeline] httpRequest
00:00:09.071 [Pipeline] echo
00:00:09.075 Sorcerer 10.211.164.101 is alive
00:00:09.123 [Pipeline] retry
00:00:09.125 [Pipeline] {
00:00:09.135 [Pipeline] httpRequest
00:00:09.139 HttpMethod: GET
00:00:09.139 URL: http://10.211.164.101/packages/spdk_264c0dc1a6040f57961765b091d38be2046b546b.tar.gz
00:00:09.139 Sending request to url: http://10.211.164.101/packages/spdk_264c0dc1a6040f57961765b091d38be2046b546b.tar.gz
00:00:09.152 Response Code: HTTP/1.1 200 OK
00:00:09.152 Success: Status code 200 is in the accepted range: 200,404
00:00:09.153 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_264c0dc1a6040f57961765b091d38be2046b546b.tar.gz
00:00:40.841 [Pipeline] }
00:00:40.859 [Pipeline] // retry
00:00:40.866 [Pipeline] sh
00:00:41.151 + tar --no-same-owner -xf spdk_264c0dc1a6040f57961765b091d38be2046b546b.tar.gz
00:00:43.695 [Pipeline] sh
00:00:43.979 + git -C spdk log --oneline -n5
00:00:43.979 264c0dc1a thread: add spdk_iobuf_node_cache
00:00:43.979 ca6f8fabc thread: rearrange spdk_iobuf_initialize()
00:00:43.979 026239f05 thread: remove pool parameter from spdk_iobuf_for_each_entry
00:00:43.979 ffd9f7465 bdev/nvme: Fix crash due to NULL io_path
00:00:43.979 ee513ce4a lib/reduce: If init fails, unlink meta file
00:00:43.991 [Pipeline] }
00:00:44.006 [Pipeline] // stage
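The two retry blocks above pull pre-packaged tarballs of the jbp repo and the SPDK revision under test from a local package cache ("Sorcerer") rather than cloning over the network; note the accepted status range 200,404, which implies a 404 is a normal "not cached" answer rather than an error. A rough curl-based equivalent of that pattern, assuming the same cache URL (the git-clone fallback is an assumption; the real job uses the Jenkins httpRequest step and its own fallback logic):

```bash
# Sketch of a cache-first fetch: 200 = tarball exists, 404 = fall back.
pkg=spdk_264c0dc1a6040f57961765b091d38be2046b546b.tar.gz
url=http://10.211.164.101/packages/$pkg

code=$(curl -s -o "$pkg" -w '%{http_code}' "$url")
case $code in
  200) tar --no-same-owner -xf "$pkg" ;;               # same tar flag as the log
  404) echo "not cached, falling back to git" >&2 ;;    # assumed fallback path
  *)   echo "unexpected status $code" >&2; exit 1 ;;
esac
```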
00:00:44.015 [Pipeline] stage
00:00:44.018 [Pipeline] { (Prepare)
00:00:44.035 [Pipeline] writeFile
00:00:44.051 [Pipeline] sh
00:00:44.337 + logger -p user.info -t JENKINS-CI
00:00:44.351 [Pipeline] sh
00:00:44.632 + logger -p user.info -t JENKINS-CI
00:00:44.644 [Pipeline] sh
00:00:44.929 + cat autorun-spdk.conf
00:00:44.929 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:44.929 SPDK_TEST_NVMF=1
00:00:44.929 SPDK_TEST_NVME_CLI=1
00:00:44.929 SPDK_TEST_NVMF_NICS=mlx5
00:00:44.929 SPDK_RUN_UBSAN=1
00:00:44.929 NET_TYPE=phy
00:00:44.937 RUN_NIGHTLY=0
00:00:44.941 [Pipeline] readFile
00:00:44.965 [Pipeline] withEnv
00:00:44.967 [Pipeline] {
00:00:44.980 [Pipeline] sh
00:00:45.270 + set -ex
00:00:45.271 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]]
00:00:45.271 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:00:45.271 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:45.271 ++ SPDK_TEST_NVMF=1
00:00:45.271 ++ SPDK_TEST_NVME_CLI=1
00:00:45.271 ++ SPDK_TEST_NVMF_NICS=mlx5
00:00:45.271 ++ SPDK_RUN_UBSAN=1
00:00:45.271 ++ NET_TYPE=phy
00:00:45.271 ++ RUN_NIGHTLY=0
00:00:45.271 + case $SPDK_TEST_NVMF_NICS in
00:00:45.271 + DRIVERS=mlx5_ib
00:00:45.271 + [[ -n mlx5_ib ]]
00:00:45.271 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:45.271 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:47.813 rmmod: ERROR: Module irdma is not currently loaded
00:00:47.813 rmmod: ERROR: Module i40iw is not currently loaded
00:00:47.813 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:47.813 + true
00:00:47.813 + for D in $DRIVERS
00:00:47.813 + sudo modprobe mlx5_ib
00:00:48.072 + exit 0
00:00:48.082 [Pipeline] }
00:00:48.098 [Pipeline] // withEnv
00:00:48.103 [Pipeline] }
00:00:48.118 [Pipeline] // stage
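The driver preparation above follows the usual "unload everything that might hold the NICs, then load only what this job needs" pattern: rmmod errors for modules that are not loaded are expected and swallowed (hence the `+ true` after the ERROR lines), and only the driver selected by SPDK_TEST_NVMF_NICS is modprobe'd back in. A condensed sketch of that sequence:

```bash
# NIC driver reset, as in the log: SPDK_TEST_NVMF_NICS=mlx5 -> mlx5_ib.
case $SPDK_TEST_NVMF_NICS in
  mlx5) DRIVERS=mlx5_ib ;;
esac

if [[ -n $DRIVERS ]]; then
  # Unload the whole candidate set; "not currently loaded" errors are fine.
  sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
  for D in $DRIVERS; do
    sudo modprobe "$D"   # reload just the driver this job actually tests
  done
fi
```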
00:00:48.129 [Pipeline] catchError
00:00:48.131 [Pipeline] {
00:00:48.145 [Pipeline] timeout
00:00:48.145 Timeout set to expire in 1 hr 0 min
00:00:48.147 [Pipeline] {
00:00:48.162 [Pipeline] stage
00:00:48.164 [Pipeline] { (Tests)
00:00:48.178 [Pipeline] sh
00:00:48.464 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest
00:00:48.464 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest
00:00:48.464 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest
00:00:48.464 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]]
00:00:48.464 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:48.464 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output
00:00:48.464 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]]
00:00:48.464 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:00:48.464 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output
00:00:48.464 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:00:48.464 + [[ nvmf-phy-autotest == pkgdep-* ]]
00:00:48.464 + cd /var/jenkins/workspace/nvmf-phy-autotest
00:00:48.464 + source /etc/os-release
00:00:48.464 ++ NAME='Fedora Linux'
00:00:48.464 ++ VERSION='39 (Cloud Edition)'
00:00:48.464 ++ ID=fedora
00:00:48.464 ++ VERSION_ID=39
00:00:48.464 ++ VERSION_CODENAME=
00:00:48.464 ++ PLATFORM_ID=platform:f39
00:00:48.464 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:00:48.464 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:48.464 ++ LOGO=fedora-logo-icon
00:00:48.464 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:00:48.464 ++ HOME_URL=https://fedoraproject.org/
00:00:48.464 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:00:48.464 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:48.464 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:48.464 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:48.464 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:00:48.464 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:48.464 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:00:48.464 ++ SUPPORT_END=2024-11-12
00:00:48.464 ++ VARIANT='Cloud Edition'
00:00:48.464 ++ VARIANT_ID=cloud
00:00:48.464 + uname -a
00:00:48.464 Linux spdk-wfp-29 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:00:48.464 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:00:51.003 Hugepages
00:00:51.003 node hugesize free / total
00:00:51.003 node0 1048576kB 0 / 0
00:00:51.003 node0 2048kB 0 / 0
00:00:51.003 node1 1048576kB 0 / 0
00:00:51.003 node1 2048kB 0 / 0
00:00:51.003
00:00:51.003 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:51.003 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:51.003 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:51.003 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:51.003 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:51.003 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:51.003 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:51.003 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:51.003 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:51.003 NVMe 0000:5e:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:00:51.003 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:51.003 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:51.003 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:51.003 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:51.003 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:51.003 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:51.003 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:51.004 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:51.262 NVMe 0000:af:00.0 8086 2701 1 nvme nvme1 nvme1n1
00:00:51.262 NVMe 0000:b0:00.0 8086 2701 1 nvme nvme2 nvme2n1
00:00:51.262 + rm -f /tmp/spdk-ld-path
00:00:51.262 + source autorun-spdk.conf
00:00:51.262 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:51.262 ++ SPDK_TEST_NVMF=1
00:00:51.262 ++ SPDK_TEST_NVME_CLI=1
00:00:51.262 ++ SPDK_TEST_NVMF_NICS=mlx5
00:00:51.262 ++ SPDK_RUN_UBSAN=1
00:00:51.262 ++ NET_TYPE=phy
00:00:51.262 ++ RUN_NIGHTLY=0
00:00:51.262 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:51.262 + [[ -n '' ]]
00:00:51.262 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:51.262 + for M in /var/spdk/build-*-manifest.txt
00:00:51.262 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:00:51.262 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:00:51.262 + for M in /var/spdk/build-*-manifest.txt
00:00:51.262 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:51.262 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:00:51.262 + for M in /var/spdk/build-*-manifest.txt
00:00:51.262 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:51.262 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:00:51.262 ++ uname
00:00:51.262 + [[ Linux == \L\i\n\u\x ]]
00:00:51.262 + sudo dmesg -T
00:00:51.262 + sudo dmesg --clear
00:00:51.521 + dmesg_pid=401108
00:00:51.521 + [[ Fedora Linux == FreeBSD ]]
00:00:51.521 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:51.521 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:51.521 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:51.521 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:51.521 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:51.521 + [[ -x /usr/src/fio-static/fio ]]
00:00:51.521 + export FIO_BIN=/usr/src/fio-static/fio
00:00:51.521 + FIO_BIN=/usr/src/fio-static/fio
00:00:51.521 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
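The block above probes for optional test assets (a VM image, a static fio build) and exports the matching variable only when the asset is actually present, so later test stages can branch on whether the variable is set. The shape of that pattern, using the same paths the log shows (the trailing usage comment is illustrative, not from the log):

```bash
# Export optional tool locations only if they exist on this test node.
if [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]; then
  export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
fi
if [[ -x /usr/src/fio-static/fio ]]; then
  export FIO_BIN=/usr/src/fio-static/fio
fi
# Later stages can then gate on the variable, e.g.: [[ -v FIO_BIN ]] && ...
```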
00:00:51.521 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:51.521 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:51.521 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:51.521 + sudo dmesg -Tw
00:00:51.521 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:51.521 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:51.521 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:51.521 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:51.521 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:00:51.521 Test configuration:
00:00:51.521 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:51.521 SPDK_TEST_NVMF=1
00:00:51.521 SPDK_TEST_NVME_CLI=1
00:00:51.521 SPDK_TEST_NVMF_NICS=mlx5
00:00:51.521 SPDK_RUN_UBSAN=1
00:00:51.521 NET_TYPE=phy
00:00:51.521 RUN_NIGHTLY=0 17:25:29 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:00:51.521 17:25:29 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:00:51.521 17:25:29 -- scripts/common.sh@15 -- $ shopt -s extglob
00:00:51.521 17:25:29 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:51.521 17:25:29 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:51.521 17:25:29 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:51.521 17:25:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:51.521 17:25:29 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:51.521 17:25:29 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:51.521 17:25:29 -- paths/export.sh@5 -- $ export PATH
00:00:51.521 17:25:29 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:51.521 17:25:29 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
00:00:51.521 17:25:29 -- common/autobuild_common.sh@486 -- $ date +%s
00:00:51.521 17:25:29 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1729178729.XXXXXX
00:00:51.521 17:25:29 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1729178729.z68N0F
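autobuild derives a unique scratch workspace from the current epoch second: the `date +%s` value (1729178729 here) feeds both the mktemp template and, a few lines below, the names of the monitor log files. A small sketch of that naming scheme:

```bash
# Per-run scratch directory keyed by the epoch timestamp.
stamp=$(date +%s)                                   # e.g. 1729178729
SPDK_WORKSPACE=$(mktemp -dt "spdk_${stamp}.XXXXXX") # mktemp fills the XXXXXX
echo "$SPDK_WORKSPACE"                              # -> /tmp/spdk_1729178729.z68N0F in this run
```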
00:00:51.521 17:25:29 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:00:51.521 17:25:29 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:00:51.521 17:25:29 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/'
00:00:51.521 17:25:29 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:51.521 17:25:29 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:51.521 17:25:29 -- common/autobuild_common.sh@502 -- $ get_config_params
00:00:51.521 17:25:29 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:00:51.521 17:25:29 -- common/autotest_common.sh@10 -- $ set +x
00:00:51.522 17:25:29 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:00:51.522 17:25:29 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:00:51.522 17:25:29 -- pm/common@17 -- $ local monitor
00:00:51.522 17:25:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:51.522 17:25:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:51.522 17:25:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:51.522 17:25:29 -- pm/common@21 -- $ date +%s
00:00:51.522 17:25:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:51.522 17:25:29 -- pm/common@21 -- $ date +%s
00:00:51.522 17:25:29 -- pm/common@25 -- $ sleep 1
00:00:51.522 17:25:29 -- pm/common@21 -- $ date +%s
00:00:51.522 17:25:29 -- pm/common@21 -- $ date +%s
00:00:51.522 17:25:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1729178729
00:00:51.522 17:25:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1729178729
00:00:51.522 17:25:29 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1729178729
00:00:51.522 17:25:29 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1729178729
00:00:51.522 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1729178729_collect-cpu-load.pm.log
00:00:51.522 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1729178729_collect-vmstat.pm.log
00:00:51.522 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1729178729_collect-cpu-temp.pm.log
00:00:51.522 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1729178729_collect-bmc-pm.bmc.pm.log
00:00:52.457 17:25:30 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
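start_monitor_resources launches one background collector per resource (CPU load, vmstat, CPU temperature, BMC power), all writing into the shared output/power directory under a common timestamped prefix, and the very next thing autobuild does is install an EXIT trap so the collectors are torn down however the build ends. A simplified sketch of that arrangement; the collector scripts and their -d/-l/-p options are taken from the log, while the pidfile bookkeeping and the body of stop_monitor_resources are assumptions (the real helpers live in SPDK's pm/common, which is not shown here):

```bash
# Launch background resource monitors, then guarantee cleanup on exit.
spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
out=$spdk/../output/power
prefix=monitor.autobuild.sh.$(date +%s)

for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
  "$spdk/scripts/perf/pm/$mon" -d "$out" -l -p "$prefix" &
  echo $! >> /tmp/monitor.pids          # assumed bookkeeping, not from the log
done

stop_monitor_resources() {
  # Assumed implementation of the helper named in the trap below.
  xargs -r kill < /tmp/monitor.pids 2>/dev/null || true
}
trap stop_monitor_resources EXIT        # same trap autobuild installs above
```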
00:00:52.457 17:25:30 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:52.457 17:25:30 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:52.457 17:25:30 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:52.457 17:25:30 -- spdk/autobuild.sh@16 -- $ date -u
00:00:52.457 Thu Oct 17 03:25:30 PM UTC 2024
00:00:52.457 17:25:30 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:52.457 v25.01-pre-75-g264c0dc1a
00:00:52.457 17:25:30 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:52.457 17:25:30 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:52.457 17:25:30 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:52.457 17:25:30 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:00:52.457 17:25:30 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:00:52.457 17:25:30 -- common/autotest_common.sh@10 -- $ set +x
00:00:52.457 ************************************
00:00:52.457 START TEST ubsan
00:00:52.457 ************************************
00:00:52.457 17:25:30 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:00:52.457 using ubsan
00:00:52.457
00:00:52.457 real 0m0.001s
00:00:52.457 user 0m0.000s
00:00:52.457 sys 0m0.000s
00:00:52.457 17:25:30 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:00:52.457 17:25:30 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:52.457 ************************************
00:00:52.457 END TEST ubsan
00:00:52.457 ************************************
00:00:52.715 17:25:30 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:52.715 17:25:30 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:52.715 17:25:30 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:52.715 17:25:30 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:52.715 17:25:30 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:52.715 17:25:30 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:52.715 17:25:30 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:52.715 17:25:30 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:52.715 17:25:30 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared
00:00:52.715 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk
00:00:52.715 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:00:52.973 Using 'verbs' RDMA provider
00:01:06.115 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:20.988 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:20.988 Creating mk/config.mk...done.
00:01:20.988 Creating mk/cc.flags.mk...done.
00:01:20.988 Type 'make' to build.
00:01:20.988 17:25:58 -- spdk/autobuild.sh@70 -- $ run_test make make -j72
00:01:20.988 17:25:58 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:20.988 17:25:58 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:20.988 17:25:58 -- common/autotest_common.sh@10 -- $ set +x
00:01:20.988 ************************************
00:01:20.988 START TEST make
00:01:20.988 ************************************
00:01:20.988 17:25:58 make -- common/autotest_common.sh@1125 -- $ make -j72
00:01:20.988 make[1]: Nothing to be done for 'all'.
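run_test, used twice above (for the ubsan marker and for make -j72), is SPDK's wrapper that brackets a command with START/END banners and reports bash's built-in timing, which is where the real/user/sys lines come from. A stripped-down stand-in that reproduces the visible behavior, not the exact helper from autotest_common.sh:

```bash
# Minimal stand-in for SPDK's run_test: banner, time the command, banner.
run_test() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"            # bash's time keyword prints the real/user/sys lines
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
}

run_test ubsan echo 'using ubsan'
run_test make make -j72
```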
00:01:29.142 The Meson build system
00:01:29.142 Version: 1.5.0
00:01:29.142 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk
00:01:29.142 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp
00:01:29.142 Build type: native build
00:01:29.142 Program cat found: YES (/usr/bin/cat)
00:01:29.142 Project name: DPDK
00:01:29.142 Project version: 24.03.0
00:01:29.142 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:29.142 C linker for the host machine: cc ld.bfd 2.40-14
00:01:29.142 Host machine cpu family: x86_64
00:01:29.142 Host machine cpu: x86_64
00:01:29.142 Message: ## Building in Developer Mode ##
00:01:29.142 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:29.142 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:29.142 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:29.142 Program python3 found: YES (/usr/bin/python3)
00:01:29.142 Program cat found: YES (/usr/bin/cat)
00:01:29.142 Compiler for C supports arguments -march=native: YES
00:01:29.142 Checking for size of "void *" : 8
00:01:29.142 Checking for size of "void *" : 8 (cached)
00:01:29.142 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:29.142 Library m found: YES
00:01:29.142 Library numa found: YES
00:01:29.142 Has header "numaif.h" : YES
00:01:29.142 Library fdt found: NO
00:01:29.142 Library execinfo found: NO
00:01:29.142 Has header "execinfo.h" : YES
00:01:29.142 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:29.142 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:29.142 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:29.142 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:29.142 Run-time dependency openssl found: YES 3.1.1
00:01:29.142 Run-time dependency libpcap found: YES 1.10.4
00:01:29.142 Has header "pcap.h" with dependency libpcap: YES
00:01:29.142 Compiler for C supports arguments -Wcast-qual: YES
00:01:29.142 Compiler for C supports arguments -Wdeprecated: YES
00:01:29.142 Compiler for C supports arguments -Wformat: YES
00:01:29.142 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:29.142 Compiler for C supports arguments -Wformat-security: NO
00:01:29.142 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:29.142 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:29.142 Compiler for C supports arguments -Wnested-externs: YES
00:01:29.142 Compiler for C supports arguments -Wold-style-definition: YES
00:01:29.142 Compiler for C supports arguments -Wpointer-arith: YES
00:01:29.142 Compiler for C supports arguments -Wsign-compare: YES
00:01:29.142 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:29.142 Compiler for C supports arguments -Wundef: YES
00:01:29.142 Compiler for C supports arguments -Wwrite-strings: YES
00:01:29.142 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:29.142 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:29.142 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:29.142 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:29.142 Program objdump found: YES (/usr/bin/objdump)
00:01:29.142 Compiler for C supports arguments -mavx512f: YES
00:01:29.142 Checking if "AVX512 checking" compiles: YES
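Each "Compiler for C supports arguments ..." line is meson compiling a tiny test program with the flag in question and checking the result; repeated probes are served from meson's cache, which is why some later lines say "(cached)". What such a probe amounts to, expressed directly in shell:

```bash
# Roughly what a meson "supports arguments" probe does under the hood:
# compile an empty program with the flag and look at the exit status.
if echo 'int main(void) { return 0; }' |
   cc -mavx512f -x c - -o /dev/null 2>/dev/null; then
  echo 'Compiler for C supports arguments -mavx512f: YES'
else
  echo 'Compiler for C supports arguments -mavx512f: NO'
fi
```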
00:01:29.142 Fetching value of define "__SSE4_2__" : 1
00:01:29.142 Fetching value of define "__AES__" : 1
00:01:29.142 Fetching value of define "__AVX__" : 1
00:01:29.142 Fetching value of define "__AVX2__" : 1
00:01:29.142 Fetching value of define "__AVX512BW__" : 1
00:01:29.142 Fetching value of define "__AVX512CD__" : 1
00:01:29.142 Fetching value of define "__AVX512DQ__" : 1
00:01:29.142 Fetching value of define "__AVX512F__" : 1
00:01:29.142 Fetching value of define "__AVX512VL__" : 1
00:01:29.142 Fetching value of define "__PCLMUL__" : 1
00:01:29.142 Fetching value of define "__RDRND__" : 1
00:01:29.142 Fetching value of define "__RDSEED__" : 1
00:01:29.142 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:29.142 Fetching value of define "__znver1__" : (undefined)
00:01:29.142 Fetching value of define "__znver2__" : (undefined)
00:01:29.142 Fetching value of define "__znver3__" : (undefined)
00:01:29.142 Fetching value of define "__znver4__" : (undefined)
00:01:29.142 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:29.142 Message: lib/log: Defining dependency "log"
00:01:29.142 Message: lib/kvargs: Defining dependency "kvargs"
00:01:29.142 Message: lib/telemetry: Defining dependency "telemetry"
00:01:29.142 Checking for function "getentropy" : NO
00:01:29.142 Message: lib/eal: Defining dependency "eal"
00:01:29.142 Message: lib/ring: Defining dependency "ring"
00:01:29.142 Message: lib/rcu: Defining dependency "rcu"
00:01:29.142 Message: lib/mempool: Defining dependency "mempool"
00:01:29.142 Message: lib/mbuf: Defining dependency "mbuf"
00:01:29.142 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:29.142 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:29.142 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:29.142 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:29.142 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:29.142 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:29.142 Compiler for C supports arguments -mpclmul: YES
00:01:29.142 Compiler for C supports arguments -maes: YES
00:01:29.142 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:29.142 Compiler for C supports arguments -mavx512bw: YES
00:01:29.142 Compiler for C supports arguments -mavx512dq: YES
00:01:29.142 Compiler for C supports arguments -mavx512vl: YES
00:01:29.143 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:29.143 Compiler for C supports arguments -mavx2: YES
00:01:29.143 Compiler for C supports arguments -mavx: YES
00:01:29.143 Message: lib/net: Defining dependency "net"
00:01:29.143 Message: lib/meter: Defining dependency "meter"
00:01:29.143 Message: lib/ethdev: Defining dependency "ethdev"
00:01:29.143 Message: lib/pci: Defining dependency "pci"
00:01:29.143 Message: lib/cmdline: Defining dependency "cmdline"
00:01:29.143 Message: lib/hash: Defining dependency "hash"
00:01:29.143 Message: lib/timer: Defining dependency "timer"
00:01:29.143 Message: lib/compressdev: Defining dependency "compressdev"
00:01:29.143 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:29.143 Message: lib/dmadev: Defining dependency "dmadev"
00:01:29.143 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:29.143 Message: lib/power: Defining dependency "power"
00:01:29.143 Message: lib/reorder: Defining dependency "reorder"
00:01:29.143 Message: lib/security: Defining dependency "security"
00:01:29.143 Has header "linux/userfaultfd.h" : YES
00:01:29.143 Has header "linux/vduse.h" : YES
00:01:29.143 Message: lib/vhost: Defining dependency "vhost"
00:01:29.143 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:29.143 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:29.143 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:29.143 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:29.143 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:29.143 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:29.143 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:29.143 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:29.143 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:29.143 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:29.143 Program doxygen found: YES (/usr/local/bin/doxygen)
00:01:29.143 Configuring doxy-api-html.conf using configuration
00:01:29.143 Configuring doxy-api-man.conf using configuration
00:01:29.143 Program mandb found: YES (/usr/bin/mandb)
00:01:29.143 Program sphinx-build found: NO
00:01:29.143 Configuring rte_build_config.h using configuration
00:01:29.143 Message:
00:01:29.143 =================
00:01:29.143 Applications Enabled
00:01:29.143 =================
00:01:29.143
00:01:29.143 apps:
00:01:29.143
00:01:29.143
00:01:29.143 Message:
00:01:29.143 =================
00:01:29.143 Libraries Enabled
00:01:29.143 =================
00:01:29.143
00:01:29.143 libs:
00:01:29.143 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:29.143 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:29.143 cryptodev, dmadev, power, reorder, security, vhost,
00:01:29.143
00:01:29.143 Message:
00:01:29.143 ===============
00:01:29.143 Drivers Enabled
00:01:29.143 ===============
00:01:29.143
00:01:29.143 common:
00:01:29.143
00:01:29.143 bus:
00:01:29.143 pci, vdev,
00:01:29.143 mempool:
00:01:29.143 ring,
00:01:29.143 dma:
00:01:29.143
00:01:29.143 net:
00:01:29.143
00:01:29.143 crypto:
00:01:29.143
00:01:29.143 compress:
00:01:29.143
00:01:29.143 vdpa:
00:01:29.143
00:01:29.143
00:01:29.143 Message:
00:01:29.143 =================
00:01:29.143 Content Skipped
00:01:29.143 =================
00:01:29.143
00:01:29.143 apps:
00:01:29.143 dumpcap: explicitly disabled via build config
00:01:29.143 graph: explicitly disabled via build config
00:01:29.143 pdump: explicitly disabled via build config
00:01:29.143 proc-info: explicitly disabled via build config
00:01:29.143 test-acl: explicitly disabled via build config
00:01:29.143 test-bbdev: explicitly disabled via build config
00:01:29.143 test-cmdline: explicitly disabled via build config
00:01:29.143 test-compress-perf: explicitly disabled via build config
00:01:29.143 test-crypto-perf: explicitly disabled via build config
00:01:29.143 test-dma-perf: explicitly disabled via build config
00:01:29.143 test-eventdev: explicitly disabled via build config
00:01:29.143 test-fib: explicitly disabled via build config
00:01:29.143 test-flow-perf: explicitly disabled via build config
00:01:29.143 test-gpudev: explicitly disabled via build config
00:01:29.143 test-mldev: explicitly disabled via build config
00:01:29.143 test-pipeline: explicitly disabled via build config
00:01:29.143 test-pmd: explicitly disabled via build config
00:01:29.143 test-regex: explicitly disabled via build config
00:01:29.143 test-sad: explicitly disabled via build config
00:01:29.143 test-security-perf: explicitly disabled via build config
00:01:29.143
00:01:29.143 libs:
00:01:29.143 argparse: explicitly disabled via build config
00:01:29.143 metrics: explicitly disabled via build config
00:01:29.143 acl: explicitly disabled via build config
00:01:29.143 bbdev: explicitly disabled via build config
00:01:29.143 bitratestats: explicitly disabled via build config
00:01:29.143 bpf: explicitly disabled via build config
00:01:29.143 cfgfile: explicitly disabled via build config
00:01:29.143 distributor: explicitly disabled via build config
00:01:29.143 efd: explicitly disabled via build config
00:01:29.143 eventdev: explicitly disabled via build config
00:01:29.143 dispatcher: explicitly disabled via build config
00:01:29.143 gpudev: explicitly disabled via build config
00:01:29.143 gro: explicitly disabled via build config
00:01:29.143 gso: explicitly disabled via build config
00:01:29.143 ip_frag: explicitly disabled via build config
00:01:29.143 jobstats: explicitly disabled via build config
00:01:29.143 latencystats: explicitly disabled via build config
00:01:29.143 lpm: explicitly disabled via build config
00:01:29.143 member: explicitly disabled via build config
00:01:29.143 pcapng: explicitly disabled via build config
00:01:29.143 rawdev: explicitly disabled via build config
00:01:29.143 regexdev: explicitly disabled via build config
00:01:29.143 mldev: explicitly disabled via build config
00:01:29.143 rib: explicitly disabled via build config
00:01:29.143 sched: explicitly disabled via build config
00:01:29.143 stack: explicitly disabled via build config
00:01:29.143 ipsec: explicitly disabled via build config
00:01:29.143 pdcp: explicitly disabled via build config
00:01:29.143 fib: explicitly disabled via build config
00:01:29.143 port: explicitly disabled via build config
00:01:29.143 pdump: explicitly disabled via build config
00:01:29.143 table: explicitly disabled via build config
00:01:29.143 pipeline: explicitly disabled via build config
00:01:29.143 graph: explicitly disabled via build config
00:01:29.143 node: explicitly disabled via build config
00:01:29.143
00:01:29.143 drivers:
00:01:29.143 common/cpt: not in enabled drivers build config
00:01:29.143 common/dpaax: not in enabled drivers build config
00:01:29.143 common/iavf: not in enabled drivers build config
00:01:29.143 common/idpf: not in enabled drivers build config
00:01:29.143 common/ionic: not in enabled drivers build config
00:01:29.143 common/mvep: not in enabled drivers build config
00:01:29.143 common/octeontx: not in enabled drivers build config
00:01:29.143 bus/auxiliary: not in enabled drivers build config
00:01:29.143 bus/cdx: not in enabled drivers build config
00:01:29.143 bus/dpaa: not in enabled drivers build config
00:01:29.143 bus/fslmc: not in enabled drivers build config
00:01:29.143 bus/ifpga: not in enabled drivers build config
00:01:29.143 bus/platform: not in enabled drivers build config
00:01:29.143 bus/uacce: not in enabled drivers build config
00:01:29.143 bus/vmbus: not in enabled drivers build config
00:01:29.143 common/cnxk: not in enabled drivers build config
00:01:29.143 common/mlx5: not in enabled drivers build config
00:01:29.143 common/nfp: not in enabled drivers build config
00:01:29.143 common/nitrox: not in enabled drivers build config
00:01:29.143 common/qat: not in enabled drivers build config
00:01:29.143 common/sfc_efx: not in enabled drivers build config
00:01:29.143 mempool/bucket: not in enabled drivers build config
00:01:29.143 mempool/cnxk: not in enabled drivers build config
00:01:29.143 mempool/dpaa: not in enabled drivers build config
00:01:29.143 mempool/dpaa2: not in enabled drivers build config
00:01:29.143 mempool/octeontx: not in enabled drivers build config
00:01:29.143 mempool/stack: not in enabled drivers build config
00:01:29.143 dma/cnxk: not in enabled drivers build config
00:01:29.143 dma/dpaa: not in enabled drivers build config
00:01:29.143 dma/dpaa2: not in enabled drivers build config
00:01:29.143 dma/hisilicon: not in enabled drivers build config
00:01:29.143 dma/idxd: not in enabled drivers build config
00:01:29.143 dma/ioat: not in enabled drivers build config
00:01:29.143 dma/skeleton: not in enabled drivers build config
00:01:29.143 net/af_packet: not in enabled drivers build config
00:01:29.143 net/af_xdp: not in enabled drivers build config
00:01:29.143 net/ark: not in enabled drivers build config
00:01:29.143 net/atlantic: not in enabled drivers build config
00:01:29.144 net/avp: not in enabled drivers build config
00:01:29.144 net/axgbe: not in enabled drivers build config
00:01:29.144 net/bnx2x: not in enabled drivers build config
00:01:29.144 net/bnxt: not in enabled drivers build config
00:01:29.144 net/bonding: not in enabled drivers build config
00:01:29.144 net/cnxk: not in enabled drivers build config
00:01:29.144 net/cpfl: not in enabled drivers build config
00:01:29.144 net/cxgbe: not in enabled drivers build config
00:01:29.144 net/dpaa: not in enabled drivers build config
00:01:29.144 net/dpaa2: not in enabled drivers build config
00:01:29.144 net/e1000: not in enabled drivers build config
00:01:29.144 net/ena: not in enabled drivers build config
00:01:29.144 net/enetc: not in enabled drivers build config
00:01:29.144 net/enetfec: not in enabled drivers build config
00:01:29.144 net/enic: not in enabled drivers build config
00:01:29.144 net/failsafe: not in enabled drivers build config
00:01:29.144 net/fm10k: not in enabled drivers build config
00:01:29.144 net/gve: not in enabled drivers build config
00:01:29.144 net/hinic: not in enabled drivers build config
00:01:29.144 net/hns3: not in enabled drivers build config
00:01:29.144 net/i40e: not in enabled drivers build config
00:01:29.144 net/iavf: not in enabled drivers build config
00:01:29.144 net/ice: not in enabled drivers build config
00:01:29.144 net/idpf: not in enabled drivers build config
00:01:29.144 net/igc: not in enabled drivers build config
00:01:29.144 net/ionic: not in enabled drivers build config
00:01:29.144 net/ipn3ke: not in enabled drivers build config
00:01:29.144 net/ixgbe: not in enabled drivers build config
00:01:29.144 net/mana: not in enabled drivers build config
00:01:29.144 net/memif: not in enabled drivers build config
00:01:29.144 net/mlx4: not in enabled drivers build config
00:01:29.144 net/mlx5: not in enabled drivers build config
00:01:29.144 net/mvneta: not in enabled drivers build config
00:01:29.144 net/mvpp2: not in enabled drivers build config
00:01:29.144 net/netvsc: not in enabled drivers build config
00:01:29.144 net/nfb: not in enabled drivers build config
00:01:29.144 net/nfp: not in enabled drivers build config
00:01:29.144 net/ngbe: not in enabled drivers build config
00:01:29.144 net/null: not in enabled drivers build config
00:01:29.144 net/octeontx: not in enabled drivers build config
00:01:29.144 net/octeon_ep: not in enabled drivers build config
00:01:29.144 net/pcap: not in enabled drivers build config
00:01:29.144 net/pfe: not in enabled drivers build config
00:01:29.144 net/qede: not in enabled drivers build config
00:01:29.144 net/ring: not in enabled drivers build config
00:01:29.144 net/sfc: not in enabled drivers build config
00:01:29.144 net/softnic: not in enabled drivers build config
00:01:29.144 net/tap: not in enabled drivers build config
00:01:29.144 net/thunderx: not in enabled drivers build config
00:01:29.144 net/txgbe: not in enabled drivers build config
00:01:29.144 net/vdev_netvsc: not in enabled drivers build config
00:01:29.144 net/vhost: not in enabled drivers build config
00:01:29.144 net/virtio: not in enabled drivers build config
00:01:29.144 net/vmxnet3: not in enabled drivers build config
00:01:29.144 raw/*: missing internal dependency, "rawdev"
00:01:29.144 crypto/armv8: not in enabled drivers build config
00:01:29.144 crypto/bcmfs: not in enabled drivers build config
00:01:29.144 crypto/caam_jr: not in enabled drivers build config
00:01:29.144 crypto/ccp: not in enabled drivers build config
00:01:29.144 crypto/cnxk: not in enabled drivers build config
00:01:29.144 crypto/dpaa_sec: not in enabled drivers build config
00:01:29.144 crypto/dpaa2_sec: not in enabled drivers build config
00:01:29.144 crypto/ipsec_mb: not in enabled drivers build config
00:01:29.144 crypto/mlx5: not in enabled drivers build config
00:01:29.144 crypto/mvsam: not in enabled drivers build config
00:01:29.144 crypto/nitrox: not in enabled drivers build config
00:01:29.144 crypto/null: not in enabled drivers build config
00:01:29.144 crypto/octeontx: not in enabled drivers build config
00:01:29.144 crypto/openssl: not in enabled drivers build config
00:01:29.144 crypto/scheduler: not in enabled drivers build config
00:01:29.144 crypto/uadk: not in enabled drivers build config
00:01:29.144 crypto/virtio: not in enabled drivers build config
00:01:29.144 compress/isal: not in enabled drivers build config
00:01:29.144 compress/mlx5: not in enabled drivers build config
00:01:29.144 compress/nitrox: not in enabled drivers build config
00:01:29.144 compress/octeontx: not in enabled drivers build config
00:01:29.144 compress/zlib: not in enabled drivers build config
00:01:29.144 regex/*: missing internal dependency, "regexdev"
00:01:29.144 ml/*: missing internal dependency, "mldev"
00:01:29.144 vdpa/ifc: not in enabled drivers build config
00:01:29.144 vdpa/mlx5: not in enabled drivers build config
00:01:29.144 vdpa/nfp: not in enabled drivers build config
00:01:29.144 vdpa/sfc: not in enabled drivers build config
00:01:29.144 event/*: missing internal dependency, "eventdev"
00:01:29.144 baseband/*: missing internal dependency, "bbdev"
00:01:29.144 gpu/*: missing internal dependency, "gpudev"
00:01:29.144
00:01:29.144
00:01:29.403 Build targets in project: 85
00:01:29.403
00:01:29.403 DPDK 24.03.0
00:01:29.403
00:01:29.403 User defined options
00:01:29.403 buildtype : debug
00:01:29.403 default_library : shared
00:01:29.403 libdir : lib
00:01:29.403 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:01:29.403 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:29.403 c_link_args :
00:01:29.403 cpu_instruction_set: native
00:01:29.403 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:01:29.403 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:01:29.403 enable_docs : false
00:01:29.403 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:29.403 enable_kmods : false
00:01:29.403 max_lcores : 128
00:01:29.403 tests : false
00:01:29.403
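The "User defined options" block records exactly how SPDK's configure invoked meson for the bundled DPDK: debug shared libraries, native CPU tuning, and long disable lists for the apps and libs the test job does not need. Re-issuing the same configuration by hand would look approximately like this; the command line is reconstructed from the summary above, not copied from the log, and the two long lists are abbreviated (the full values are printed in the block above):

```bash
# Hand-rolled equivalent of the meson invocation summarized above.
meson setup build-tmp \
  --buildtype=debug \
  --default-library=shared \
  --libdir=lib \
  --prefix=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build \
  -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
  -Dcpu_instruction_set=native \
  -Ddisable_apps='test-fib,test-sad,...' \
  -Ddisable_libs='bbdev,argparse,...' \
  -Denable_docs=false \
  -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring' \
  -Denable_kmods=false \
  -Dmax_lcores=128 \
  -Dtests=false
```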
00:01:29.403 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:29.978 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp'
00:01:29.978 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:29.978 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:29.978 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:29.978 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:29.978 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:29.978 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:29.978 [7/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:29.978 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:29.978 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:29.978 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:30.237 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:30.237 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:30.237 [13/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:30.237 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:30.237 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:30.237 [16/268] Linking static target lib/librte_kvargs.a
00:01:30.237 [17/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:30.237 [18/268] Linking static target lib/librte_log.a
00:01:30.237 [19/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:30.497 [20/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:30.497 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:30.497 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:30.497 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:30.497 [24/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:30.497 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:30.497 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:30.497 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:30.497 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:30.497 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:30.497 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:30.497 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:30.497 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:30.497 [33/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:30.497 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:30.497 [35/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:30.497 [36/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:30.497 [37/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:30.497 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:30.497 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:30.497 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:30.497 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:30.497 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:30.497 [43/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:30.498 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:30.498 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:30.498 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:30.498 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:30.498 [48/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:30.498 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:30.498 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:30.498 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:30.498 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:30.498 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:30.498 [54/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:30.498 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:30.498 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:30.498 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:30.498 [58/268] Linking static target lib/librte_telemetry.a
00:01:30.498 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:30.498 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:30.498 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:30.498 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:30.498 [63/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:30.498 [64/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:30.498 [65/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:30.498 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:30.498 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:30.498 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:30.498 [69/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:30.498 [70/268] Linking static target lib/librte_ring.a
00:01:30.763 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:30.763 [72/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:30.763 [73/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:30.763 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:30.763 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:30.763 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:30.763 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:30.763 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:30.763 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:30.763 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:30.763 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:30.763 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:30.763 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:30.763 [84/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:30.763 [85/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:30.763 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:30.763 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:30.763 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:30.763 [89/268] Linking static target lib/librte_pci.a
00:01:30.763 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:30.763 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:30.763 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:30.763 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:30.763 [94/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:30.763 [95/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:30.763 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:30.763 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:30.763 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:30.763 [99/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:30.763 [100/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:30.763 [101/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:30.763 [102/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:30.763 [103/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:30.764 [104/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:30.764 [105/268] Linking static target lib/librte_mempool.a
00:01:30.764 [106/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:30.764 [107/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:30.764 [108/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:30.764 [109/268] Linking static target lib/librte_rcu.a
00:01:30.764 [110/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:31.023 [111/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:31.023 [112/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:31.023 [113/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:31.023 [114/268] Linking static target lib/librte_net.a
00:01:31.023 [115/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:31.023 [116/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:31.023 [117/268] Linking static target lib/librte_meter.a
00:01:31.023 [118/268] Linking static target lib/librte_eal.a
00:01:31.023 [119/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:31.023 [120/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:31.023 [121/268] Linking static target lib/librte_mbuf.a
00:01:31.023 [122/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:31.023 [123/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:01:31.023 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:31.023 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:31.023 [126/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:31.023 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:31.023 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:01:31.023 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:31.023 [130/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:31.023 [131/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:31.023 [132/268] Linking static target lib/librte_cmdline.a
00:01:31.023 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:31.023 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:31.023 [135/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:31.282 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:31.282 [137/268] Linking static target lib/librte_timer.a
00:01:31.282 [138/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:31.282 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:31.282 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:31.282 [141/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:31.282 [142/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:31.282 [143/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:31.282 [144/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:31.282 [145/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:31.282 [146/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:31.282 [147/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:31.282 [148/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:31.282 [149/268] Linking target lib/librte_log.so.24.1
00:01:31.282 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:31.282 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:31.282 [152/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:31.282 [153/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:31.282 [154/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:31.282 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:31.282 [157/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:31.282 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:31.282 [159/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:31.282 [160/268] Linking static target lib/librte_dmadev.a 00:01:31.282 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:31.282 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:31.282 [163/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.282 [164/268] Linking static target lib/librte_compressdev.a 00:01:31.282 [165/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:31.282 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:31.282 [167/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.282 [168/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:31.282 [169/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:31.282 [170/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:31.282 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:31.282 [172/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:31.282 [173/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:31.282 [174/268] Linking static target lib/librte_power.a 00:01:31.282 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:31.282 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:31.282 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:31.282 [178/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:31.282 [179/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:31.282 [180/268] Linking static target lib/librte_reorder.a 00:01:31.282 [181/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:31.282 [182/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:31.282 [183/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:31.282 [184/268] Linking static target lib/librte_security.a 00:01:31.282 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:31.282 [186/268] Linking target lib/librte_kvargs.so.24.1 00:01:31.282 [187/268] Linking target lib/librte_telemetry.so.24.1 00:01:31.282 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:31.282 [189/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:31.542 [190/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:31.542 [191/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:31.542 [192/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:31.542 [193/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:31.542 [194/268] Linking static target lib/librte_hash.a 00:01:31.542 [195/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:31.542 [196/268] Compiling C object 
drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:31.542 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:31.542 [198/268] Linking static target drivers/librte_bus_vdev.a 00:01:31.542 [199/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:31.542 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:31.542 [201/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:31.542 [202/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:31.542 [203/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:31.542 [204/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:31.542 [205/268] Linking static target drivers/librte_bus_pci.a 00:01:31.542 [206/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.542 [207/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.542 [208/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:31.542 [209/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:31.542 [210/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:31.542 [211/268] Linking static target drivers/librte_mempool_ring.a 00:01:31.542 [212/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:31.542 [213/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:31.802 [214/268] Linking static target lib/librte_cryptodev.a 00:01:31.802 [215/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.802 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.802 [217/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.802 [218/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.061 [219/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.061 [220/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.061 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:32.061 [222/268] Linking static target lib/librte_ethdev.a 00:01:32.061 [223/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:32.319 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.319 [225/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.319 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.579 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.149 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:33.149 [229/268] Linking static target lib/librte_vhost.a 00:01:34.086 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.466 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.232 [232/268] Generating 
lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.492 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.751 [234/268] Linking target lib/librte_eal.so.24.1 00:01:42.751 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:42.751 [236/268] Linking target lib/librte_pci.so.24.1 00:01:42.751 [237/268] Linking target lib/librte_ring.so.24.1 00:01:42.751 [238/268] Linking target lib/librte_meter.so.24.1 00:01:42.751 [239/268] Linking target lib/librte_dmadev.so.24.1 00:01:42.751 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:42.751 [241/268] Linking target lib/librte_timer.so.24.1 00:01:43.011 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:43.011 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:43.011 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:43.011 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:43.011 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:43.011 [247/268] Linking target lib/librte_rcu.so.24.1 00:01:43.011 [248/268] Linking target lib/librte_mempool.so.24.1 00:01:43.011 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:43.011 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:43.011 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:43.270 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:43.271 [253/268] Linking target lib/librte_mbuf.so.24.1 00:01:43.271 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:43.271 [255/268] Linking target lib/librte_net.so.24.1 00:01:43.271 [256/268] Linking target lib/librte_compressdev.so.24.1 00:01:43.271 [257/268] Linking target lib/librte_reorder.so.24.1 00:01:43.271 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:01:43.529 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:43.529 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:43.529 [261/268] Linking target lib/librte_security.so.24.1 00:01:43.529 [262/268] Linking target lib/librte_hash.so.24.1 00:01:43.529 [263/268] Linking target lib/librte_cmdline.so.24.1 00:01:43.529 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:43.529 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:43.788 [266/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:43.788 [267/268] Linking target lib/librte_power.so.24.1 00:01:43.788 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:43.788 INFO: autodetecting backend as ninja 00:01:43.788 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 72 00:01:53.763 CC lib/log/log.o 00:01:53.763 CC lib/log/log_flags.o 00:01:53.763 CC lib/log/log_deprecated.o 00:01:53.763 CC lib/ut_mock/mock.o 00:01:53.763 CC lib/ut/ut.o 00:01:53.763 LIB libspdk_ut_mock.a 00:01:53.763 LIB libspdk_log.a 00:01:53.763 LIB libspdk_ut.a 00:01:53.763 SO libspdk_ut_mock.so.6.0 00:01:53.763 SO libspdk_log.so.7.1 00:01:53.763 SO libspdk_ut.so.2.0 00:01:53.763 SYMLINK libspdk_ut_mock.so 00:01:53.763 
SYMLINK libspdk_log.so 00:01:53.763 SYMLINK libspdk_ut.so 00:01:53.763 CXX lib/trace_parser/trace.o 00:01:53.763 CC lib/ioat/ioat.o 00:01:53.763 CC lib/util/base64.o 00:01:53.763 CC lib/util/crc16.o 00:01:53.763 CC lib/util/bit_array.o 00:01:53.763 CC lib/util/cpuset.o 00:01:53.763 CC lib/util/crc32.o 00:01:53.763 CC lib/util/crc32c.o 00:01:53.763 CC lib/dma/dma.o 00:01:53.763 CC lib/util/crc32_ieee.o 00:01:53.763 CC lib/util/crc64.o 00:01:53.763 CC lib/util/dif.o 00:01:53.763 CC lib/util/fd.o 00:01:53.763 CC lib/util/fd_group.o 00:01:53.763 CC lib/util/file.o 00:01:53.763 CC lib/util/hexlify.o 00:01:53.763 CC lib/util/iov.o 00:01:53.763 CC lib/util/math.o 00:01:53.763 CC lib/util/net.o 00:01:53.763 CC lib/util/pipe.o 00:01:53.763 CC lib/util/strerror_tls.o 00:01:53.763 CC lib/util/string.o 00:01:53.763 CC lib/util/uuid.o 00:01:53.763 CC lib/util/xor.o 00:01:53.763 CC lib/util/zipf.o 00:01:53.763 CC lib/util/md5.o 00:01:53.763 CC lib/vfio_user/host/vfio_user_pci.o 00:01:53.763 CC lib/vfio_user/host/vfio_user.o 00:01:54.021 LIB libspdk_dma.a 00:01:54.022 LIB libspdk_ioat.a 00:01:54.022 SO libspdk_dma.so.5.0 00:01:54.022 SO libspdk_ioat.so.7.0 00:01:54.022 SYMLINK libspdk_dma.so 00:01:54.022 SYMLINK libspdk_ioat.so 00:01:54.022 LIB libspdk_vfio_user.a 00:01:54.022 SO libspdk_vfio_user.so.5.0 00:01:54.022 LIB libspdk_util.a 00:01:54.280 SYMLINK libspdk_vfio_user.so 00:01:54.280 SO libspdk_util.so.10.0 00:01:54.280 SYMLINK libspdk_util.so 00:01:54.280 LIB libspdk_trace_parser.a 00:01:54.538 SO libspdk_trace_parser.so.6.0 00:01:54.538 SYMLINK libspdk_trace_parser.so 00:01:54.797 CC lib/idxd/idxd_kernel.o 00:01:54.797 CC lib/idxd/idxd.o 00:01:54.797 CC lib/idxd/idxd_user.o 00:01:54.797 CC lib/rdma_provider/common.o 00:01:54.797 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:54.797 CC lib/json/json_write.o 00:01:54.797 CC lib/json/json_parse.o 00:01:54.797 CC lib/json/json_util.o 00:01:54.797 CC lib/vmd/vmd.o 00:01:54.797 CC lib/vmd/led.o 00:01:54.797 CC lib/env_dpdk/memory.o 00:01:54.797 CC lib/env_dpdk/env.o 00:01:54.797 CC lib/env_dpdk/init.o 00:01:54.797 CC lib/env_dpdk/pci.o 00:01:54.797 CC lib/env_dpdk/threads.o 00:01:54.797 CC lib/env_dpdk/pci_ioat.o 00:01:54.797 CC lib/env_dpdk/pci_virtio.o 00:01:54.797 CC lib/env_dpdk/pci_vmd.o 00:01:54.797 CC lib/env_dpdk/sigbus_handler.o 00:01:54.797 CC lib/env_dpdk/pci_idxd.o 00:01:54.797 CC lib/conf/conf.o 00:01:54.797 CC lib/env_dpdk/pci_event.o 00:01:54.797 CC lib/env_dpdk/pci_dpdk.o 00:01:54.797 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:54.797 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:54.797 CC lib/rdma_utils/rdma_utils.o 00:01:54.797 LIB libspdk_rdma_provider.a 00:01:54.797 SO libspdk_rdma_provider.so.6.0 00:01:55.056 LIB libspdk_conf.a 00:01:55.056 LIB libspdk_json.a 00:01:55.056 SO libspdk_conf.so.6.0 00:01:55.056 SYMLINK libspdk_rdma_provider.so 00:01:55.056 LIB libspdk_rdma_utils.a 00:01:55.056 SO libspdk_json.so.6.0 00:01:55.056 SO libspdk_rdma_utils.so.1.0 00:01:55.056 SYMLINK libspdk_conf.so 00:01:55.056 SYMLINK libspdk_json.so 00:01:55.056 SYMLINK libspdk_rdma_utils.so 00:01:55.056 LIB libspdk_idxd.a 00:01:55.315 SO libspdk_idxd.so.12.1 00:01:55.315 LIB libspdk_vmd.a 00:01:55.315 SO libspdk_vmd.so.6.0 00:01:55.315 SYMLINK libspdk_idxd.so 00:01:55.315 SYMLINK libspdk_vmd.so 00:01:55.315 CC lib/jsonrpc/jsonrpc_server.o 00:01:55.315 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:55.315 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:55.315 CC lib/jsonrpc/jsonrpc_client.o 00:01:55.575 LIB libspdk_jsonrpc.a 00:01:55.575 SO libspdk_jsonrpc.so.6.0 00:01:55.834 
SYMLINK libspdk_jsonrpc.so 00:01:55.834 LIB libspdk_env_dpdk.a 00:01:55.834 SO libspdk_env_dpdk.so.15.0 00:01:55.834 SYMLINK libspdk_env_dpdk.so 00:01:56.093 CC lib/rpc/rpc.o 00:01:56.093 LIB libspdk_rpc.a 00:01:56.352 SO libspdk_rpc.so.6.0 00:01:56.352 SYMLINK libspdk_rpc.so 00:01:56.611 CC lib/keyring/keyring_rpc.o 00:01:56.611 CC lib/keyring/keyring.o 00:01:56.611 CC lib/trace/trace.o 00:01:56.611 CC lib/trace/trace_rpc.o 00:01:56.611 CC lib/trace/trace_flags.o 00:01:56.611 CC lib/notify/notify.o 00:01:56.611 CC lib/notify/notify_rpc.o 00:01:56.871 LIB libspdk_keyring.a 00:01:56.871 LIB libspdk_notify.a 00:01:56.871 SO libspdk_keyring.so.2.0 00:01:56.871 SO libspdk_notify.so.6.0 00:01:56.871 LIB libspdk_trace.a 00:01:56.871 SYMLINK libspdk_notify.so 00:01:56.871 SYMLINK libspdk_keyring.so 00:01:56.871 SO libspdk_trace.so.11.0 00:01:56.871 SYMLINK libspdk_trace.so 00:01:57.438 CC lib/sock/sock_rpc.o 00:01:57.438 CC lib/sock/sock.o 00:01:57.438 CC lib/thread/thread.o 00:01:57.438 CC lib/thread/iobuf.o 00:01:57.698 LIB libspdk_sock.a 00:01:57.698 SO libspdk_sock.so.10.0 00:01:57.698 SYMLINK libspdk_sock.so 00:01:57.957 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:57.957 CC lib/nvme/nvme_ctrlr.o 00:01:57.957 CC lib/nvme/nvme_fabric.o 00:01:57.957 CC lib/nvme/nvme_ns_cmd.o 00:01:57.957 CC lib/nvme/nvme_ns.o 00:01:57.957 CC lib/nvme/nvme_pcie_common.o 00:01:57.957 CC lib/nvme/nvme_pcie.o 00:01:57.957 CC lib/nvme/nvme_qpair.o 00:01:57.957 CC lib/nvme/nvme.o 00:01:57.957 CC lib/nvme/nvme_quirks.o 00:01:57.957 CC lib/nvme/nvme_transport.o 00:01:57.957 CC lib/nvme/nvme_discovery.o 00:01:57.957 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:57.957 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:57.957 CC lib/nvme/nvme_tcp.o 00:01:57.957 CC lib/nvme/nvme_opal.o 00:01:57.957 CC lib/nvme/nvme_io_msg.o 00:01:57.957 CC lib/nvme/nvme_poll_group.o 00:01:57.957 CC lib/nvme/nvme_zns.o 00:01:57.957 CC lib/nvme/nvme_stubs.o 00:01:57.957 CC lib/nvme/nvme_auth.o 00:01:57.957 CC lib/nvme/nvme_cuse.o 00:01:57.957 CC lib/nvme/nvme_rdma.o 00:01:58.525 LIB libspdk_thread.a 00:01:58.525 SO libspdk_thread.so.11.0 00:01:58.525 SYMLINK libspdk_thread.so 00:01:58.784 CC lib/accel/accel_rpc.o 00:01:58.784 CC lib/accel/accel.o 00:01:58.784 CC lib/accel/accel_sw.o 00:01:58.784 CC lib/init/subsystem_rpc.o 00:01:58.784 CC lib/virtio/virtio.o 00:01:58.784 CC lib/init/json_config.o 00:01:58.784 CC lib/virtio/virtio_vfio_user.o 00:01:58.784 CC lib/virtio/virtio_pci.o 00:01:58.784 CC lib/init/subsystem.o 00:01:58.784 CC lib/virtio/virtio_vhost_user.o 00:01:58.784 CC lib/init/rpc.o 00:01:58.784 CC lib/blob/blobstore.o 00:01:58.784 CC lib/blob/request.o 00:01:58.784 CC lib/blob/blob_bs_dev.o 00:01:58.784 CC lib/blob/zeroes.o 00:01:58.784 CC lib/fsdev/fsdev.o 00:01:58.784 CC lib/fsdev/fsdev_io.o 00:01:58.784 CC lib/fsdev/fsdev_rpc.o 00:01:59.041 LIB libspdk_init.a 00:01:59.041 SO libspdk_init.so.6.0 00:01:59.300 LIB libspdk_virtio.a 00:01:59.300 SYMLINK libspdk_init.so 00:01:59.300 SO libspdk_virtio.so.7.0 00:01:59.300 SYMLINK libspdk_virtio.so 00:01:59.559 LIB libspdk_fsdev.a 00:01:59.559 CC lib/event/reactor.o 00:01:59.559 CC lib/event/log_rpc.o 00:01:59.559 CC lib/event/app.o 00:01:59.559 CC lib/event/app_rpc.o 00:01:59.559 CC lib/event/scheduler_static.o 00:01:59.559 SO libspdk_fsdev.so.1.0 00:01:59.559 SYMLINK libspdk_fsdev.so 00:01:59.559 LIB libspdk_accel.a 00:01:59.817 SO libspdk_accel.so.16.0 00:01:59.817 LIB libspdk_nvme.a 00:01:59.817 SYMLINK libspdk_accel.so 00:01:59.817 LIB libspdk_event.a 00:01:59.817 SO libspdk_event.so.14.0 00:01:59.817 SO 
libspdk_nvme.so.14.0 00:01:59.817 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:01:59.817 SYMLINK libspdk_event.so 00:02:00.077 SYMLINK libspdk_nvme.so 00:02:00.077 CC lib/bdev/bdev.o 00:02:00.077 CC lib/bdev/bdev_rpc.o 00:02:00.077 CC lib/bdev/bdev_zone.o 00:02:00.077 CC lib/bdev/part.o 00:02:00.077 CC lib/bdev/scsi_nvme.o 00:02:00.336 LIB libspdk_fuse_dispatcher.a 00:02:00.336 SO libspdk_fuse_dispatcher.so.1.0 00:02:00.594 SYMLINK libspdk_fuse_dispatcher.so 00:02:01.164 LIB libspdk_blob.a 00:02:01.164 SO libspdk_blob.so.11.0 00:02:01.164 SYMLINK libspdk_blob.so 00:02:01.423 CC lib/lvol/lvol.o 00:02:01.423 CC lib/blobfs/blobfs.o 00:02:01.423 CC lib/blobfs/tree.o 00:02:01.993 LIB libspdk_bdev.a 00:02:01.993 SO libspdk_bdev.so.17.0 00:02:02.252 LIB libspdk_blobfs.a 00:02:02.252 SYMLINK libspdk_bdev.so 00:02:02.252 SO libspdk_blobfs.so.10.0 00:02:02.252 LIB libspdk_lvol.a 00:02:02.252 SO libspdk_lvol.so.10.0 00:02:02.252 SYMLINK libspdk_blobfs.so 00:02:02.252 SYMLINK libspdk_lvol.so 00:02:02.518 CC lib/nbd/nbd_rpc.o 00:02:02.518 CC lib/nbd/nbd.o 00:02:02.518 CC lib/scsi/lun.o 00:02:02.518 CC lib/scsi/dev.o 00:02:02.518 CC lib/scsi/port.o 00:02:02.518 CC lib/scsi/scsi.o 00:02:02.518 CC lib/scsi/scsi_pr.o 00:02:02.518 CC lib/scsi/scsi_bdev.o 00:02:02.518 CC lib/scsi/scsi_rpc.o 00:02:02.518 CC lib/scsi/task.o 00:02:02.518 CC lib/ublk/ublk.o 00:02:02.518 CC lib/nvmf/ctrlr.o 00:02:02.518 CC lib/ublk/ublk_rpc.o 00:02:02.518 CC lib/ftl/ftl_core.o 00:02:02.518 CC lib/nvmf/subsystem.o 00:02:02.518 CC lib/nvmf/ctrlr_discovery.o 00:02:02.518 CC lib/ftl/ftl_init.o 00:02:02.518 CC lib/nvmf/ctrlr_bdev.o 00:02:02.518 CC lib/ftl/ftl_layout.o 00:02:02.518 CC lib/nvmf/nvmf.o 00:02:02.518 CC lib/ftl/ftl_debug.o 00:02:02.518 CC lib/nvmf/nvmf_rpc.o 00:02:02.518 CC lib/ftl/ftl_io.o 00:02:02.518 CC lib/nvmf/transport.o 00:02:02.518 CC lib/ftl/ftl_sb.o 00:02:02.518 CC lib/nvmf/tcp.o 00:02:02.518 CC lib/ftl/ftl_l2p.o 00:02:02.519 CC lib/nvmf/stubs.o 00:02:02.519 CC lib/ftl/ftl_l2p_flat.o 00:02:02.519 CC lib/nvmf/rdma.o 00:02:02.519 CC lib/nvmf/mdns_server.o 00:02:02.519 CC lib/ftl/ftl_nv_cache.o 00:02:02.519 CC lib/ftl/ftl_band.o 00:02:02.519 CC lib/nvmf/auth.o 00:02:02.519 CC lib/ftl/ftl_band_ops.o 00:02:02.519 CC lib/ftl/ftl_writer.o 00:02:02.519 CC lib/ftl/ftl_rq.o 00:02:02.519 CC lib/ftl/ftl_p2l.o 00:02:02.519 CC lib/ftl/ftl_l2p_cache.o 00:02:02.519 CC lib/ftl/ftl_reloc.o 00:02:02.519 CC lib/ftl/ftl_p2l_log.o 00:02:02.519 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:02.519 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:02.519 CC lib/ftl/mngt/ftl_mngt.o 00:02:02.519 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:02.519 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:02.519 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:02.519 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:02.519 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:02.519 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:02.519 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:02.519 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:02.519 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:02.519 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:02.519 CC lib/ftl/utils/ftl_md.o 00:02:02.519 CC lib/ftl/utils/ftl_conf.o 00:02:02.519 CC lib/ftl/utils/ftl_mempool.o 00:02:02.519 CC lib/ftl/utils/ftl_property.o 00:02:02.519 CC lib/ftl/utils/ftl_bitmap.o 00:02:02.519 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:02.519 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:02.519 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:02.519 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:02.519 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:02.519 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 
00:02:02.519 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:02.519 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:02.519 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:02.519 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:02.519 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:02.519 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:02.519 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:02.779 CC lib/ftl/base/ftl_base_dev.o 00:02:02.779 CC lib/ftl/base/ftl_base_bdev.o 00:02:02.779 CC lib/ftl/ftl_trace.o 00:02:03.347 LIB libspdk_nbd.a 00:02:03.347 SO libspdk_nbd.so.7.0 00:02:03.347 LIB libspdk_scsi.a 00:02:03.347 SYMLINK libspdk_nbd.so 00:02:03.347 SO libspdk_scsi.so.9.0 00:02:03.347 LIB libspdk_ublk.a 00:02:03.347 SO libspdk_ublk.so.3.0 00:02:03.347 SYMLINK libspdk_scsi.so 00:02:03.347 SYMLINK libspdk_ublk.so 00:02:03.605 LIB libspdk_ftl.a 00:02:03.605 CC lib/vhost/vhost.o 00:02:03.605 CC lib/vhost/vhost_scsi.o 00:02:03.605 CC lib/vhost/vhost_rpc.o 00:02:03.605 CC lib/vhost/vhost_blk.o 00:02:03.605 CC lib/vhost/rte_vhost_user.o 00:02:03.605 CC lib/iscsi/conn.o 00:02:03.605 CC lib/iscsi/param.o 00:02:03.605 CC lib/iscsi/init_grp.o 00:02:03.605 CC lib/iscsi/iscsi.o 00:02:03.605 CC lib/iscsi/portal_grp.o 00:02:03.605 CC lib/iscsi/tgt_node.o 00:02:03.606 CC lib/iscsi/iscsi_subsystem.o 00:02:03.606 CC lib/iscsi/iscsi_rpc.o 00:02:03.606 CC lib/iscsi/task.o 00:02:03.864 SO libspdk_ftl.so.9.0 00:02:04.121 SYMLINK libspdk_ftl.so 00:02:04.380 LIB libspdk_nvmf.a 00:02:04.380 SO libspdk_nvmf.so.20.0 00:02:04.380 LIB libspdk_vhost.a 00:02:04.638 SO libspdk_vhost.so.8.0 00:02:04.638 SYMLINK libspdk_nvmf.so 00:02:04.638 SYMLINK libspdk_vhost.so 00:02:04.638 LIB libspdk_iscsi.a 00:02:04.638 SO libspdk_iscsi.so.8.0 00:02:04.896 SYMLINK libspdk_iscsi.so 00:02:05.464 CC module/env_dpdk/env_dpdk_rpc.o 00:02:05.464 CC module/accel/error/accel_error.o 00:02:05.464 CC module/accel/ioat/accel_ioat.o 00:02:05.464 CC module/accel/ioat/accel_ioat_rpc.o 00:02:05.464 CC module/accel/error/accel_error_rpc.o 00:02:05.464 CC module/accel/dsa/accel_dsa.o 00:02:05.464 CC module/accel/dsa/accel_dsa_rpc.o 00:02:05.464 LIB libspdk_env_dpdk_rpc.a 00:02:05.464 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:05.464 CC module/fsdev/aio/fsdev_aio.o 00:02:05.464 CC module/fsdev/aio/linux_aio_mgr.o 00:02:05.464 CC module/blob/bdev/blob_bdev.o 00:02:05.464 CC module/accel/iaa/accel_iaa.o 00:02:05.464 CC module/sock/posix/posix.o 00:02:05.464 CC module/scheduler/gscheduler/gscheduler.o 00:02:05.464 CC module/accel/iaa/accel_iaa_rpc.o 00:02:05.464 CC module/keyring/file/keyring.o 00:02:05.464 CC module/keyring/file/keyring_rpc.o 00:02:05.464 CC module/keyring/linux/keyring_rpc.o 00:02:05.464 CC module/keyring/linux/keyring.o 00:02:05.464 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:05.464 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:05.464 SO libspdk_env_dpdk_rpc.so.6.0 00:02:05.722 SYMLINK libspdk_env_dpdk_rpc.so 00:02:05.722 LIB libspdk_accel_error.a 00:02:05.722 LIB libspdk_accel_ioat.a 00:02:05.722 LIB libspdk_keyring_linux.a 00:02:05.722 LIB libspdk_scheduler_gscheduler.a 00:02:05.722 LIB libspdk_keyring_file.a 00:02:05.722 SO libspdk_accel_error.so.2.0 00:02:05.722 SO libspdk_scheduler_gscheduler.so.4.0 00:02:05.722 SO libspdk_accel_ioat.so.6.0 00:02:05.722 LIB libspdk_scheduler_dpdk_governor.a 00:02:05.722 SO libspdk_keyring_linux.so.1.0 00:02:05.722 SO libspdk_keyring_file.so.2.0 00:02:05.722 LIB libspdk_accel_iaa.a 00:02:05.722 LIB libspdk_scheduler_dynamic.a 00:02:05.722 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:05.722 SYMLINK libspdk_accel_error.so 
00:02:05.722 LIB libspdk_accel_dsa.a 00:02:05.722 SYMLINK libspdk_scheduler_gscheduler.so 00:02:05.722 SO libspdk_accel_iaa.so.3.0 00:02:05.722 LIB libspdk_blob_bdev.a 00:02:05.722 SYMLINK libspdk_accel_ioat.so 00:02:05.722 SO libspdk_scheduler_dynamic.so.4.0 00:02:05.722 SYMLINK libspdk_keyring_linux.so 00:02:05.722 SO libspdk_accel_dsa.so.5.0 00:02:05.722 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:05.722 SYMLINK libspdk_keyring_file.so 00:02:05.722 SO libspdk_blob_bdev.so.11.0 00:02:05.981 SYMLINK libspdk_scheduler_dynamic.so 00:02:05.981 SYMLINK libspdk_accel_iaa.so 00:02:05.981 SYMLINK libspdk_accel_dsa.so 00:02:05.981 SYMLINK libspdk_blob_bdev.so 00:02:05.981 LIB libspdk_fsdev_aio.a 00:02:06.239 SO libspdk_fsdev_aio.so.1.0 00:02:06.239 LIB libspdk_sock_posix.a 00:02:06.239 SYMLINK libspdk_fsdev_aio.so 00:02:06.239 SO libspdk_sock_posix.so.6.0 00:02:06.239 SYMLINK libspdk_sock_posix.so 00:02:06.239 CC module/blobfs/bdev/blobfs_bdev.o 00:02:06.239 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:06.239 CC module/bdev/error/vbdev_error.o 00:02:06.239 CC module/bdev/error/vbdev_error_rpc.o 00:02:06.239 CC module/bdev/null/bdev_null.o 00:02:06.239 CC module/bdev/null/bdev_null_rpc.o 00:02:06.239 CC module/bdev/malloc/bdev_malloc.o 00:02:06.239 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:06.239 CC module/bdev/split/vbdev_split.o 00:02:06.239 CC module/bdev/delay/vbdev_delay.o 00:02:06.239 CC module/bdev/split/vbdev_split_rpc.o 00:02:06.239 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:06.239 CC module/bdev/gpt/vbdev_gpt.o 00:02:06.239 CC module/bdev/gpt/gpt.o 00:02:06.239 CC module/bdev/lvol/vbdev_lvol.o 00:02:06.239 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:06.239 CC module/bdev/passthru/vbdev_passthru.o 00:02:06.239 CC module/bdev/nvme/bdev_nvme.o 00:02:06.239 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:06.239 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:06.239 CC module/bdev/aio/bdev_aio_rpc.o 00:02:06.239 CC module/bdev/aio/bdev_aio.o 00:02:06.239 CC module/bdev/nvme/nvme_rpc.o 00:02:06.239 CC module/bdev/nvme/bdev_mdns_client.o 00:02:06.239 CC module/bdev/nvme/vbdev_opal.o 00:02:06.239 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:06.239 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:06.239 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:06.239 CC module/bdev/ftl/bdev_ftl.o 00:02:06.239 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:06.239 CC module/bdev/iscsi/bdev_iscsi.o 00:02:06.239 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:06.239 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:06.498 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:06.498 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:06.498 CC module/bdev/raid/bdev_raid.o 00:02:06.498 CC module/bdev/raid/bdev_raid_rpc.o 00:02:06.498 CC module/bdev/raid/bdev_raid_sb.o 00:02:06.498 CC module/bdev/raid/raid0.o 00:02:06.498 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:06.498 CC module/bdev/raid/concat.o 00:02:06.498 CC module/bdev/raid/raid1.o 00:02:06.498 LIB libspdk_blobfs_bdev.a 00:02:06.498 SO libspdk_blobfs_bdev.so.6.0 00:02:06.498 LIB libspdk_bdev_split.a 00:02:06.498 LIB libspdk_bdev_error.a 00:02:06.498 LIB libspdk_bdev_null.a 00:02:06.498 SO libspdk_bdev_split.so.6.0 00:02:06.498 LIB libspdk_bdev_gpt.a 00:02:06.756 SO libspdk_bdev_error.so.6.0 00:02:06.756 SYMLINK libspdk_blobfs_bdev.so 00:02:06.756 SO libspdk_bdev_gpt.so.6.0 00:02:06.756 SO libspdk_bdev_null.so.6.0 00:02:06.756 LIB libspdk_bdev_passthru.a 00:02:06.756 SYMLINK libspdk_bdev_split.so 00:02:06.756 SYMLINK libspdk_bdev_error.so 00:02:06.756 LIB 
libspdk_bdev_delay.a 00:02:06.756 SYMLINK libspdk_bdev_null.so 00:02:06.756 SYMLINK libspdk_bdev_gpt.so 00:02:06.756 LIB libspdk_bdev_zone_block.a 00:02:06.756 LIB libspdk_bdev_aio.a 00:02:06.756 SO libspdk_bdev_passthru.so.6.0 00:02:06.756 LIB libspdk_bdev_iscsi.a 00:02:06.756 SO libspdk_bdev_delay.so.6.0 00:02:06.756 SO libspdk_bdev_zone_block.so.6.0 00:02:06.756 SO libspdk_bdev_aio.so.6.0 00:02:06.756 LIB libspdk_bdev_ftl.a 00:02:06.756 SYMLINK libspdk_bdev_passthru.so 00:02:06.756 SO libspdk_bdev_iscsi.so.6.0 00:02:06.756 SO libspdk_bdev_ftl.so.6.0 00:02:06.756 LIB libspdk_bdev_malloc.a 00:02:06.756 SYMLINK libspdk_bdev_delay.so 00:02:06.756 SYMLINK libspdk_bdev_zone_block.so 00:02:06.756 SYMLINK libspdk_bdev_aio.so 00:02:06.756 LIB libspdk_bdev_lvol.a 00:02:06.756 SO libspdk_bdev_malloc.so.6.0 00:02:06.756 SYMLINK libspdk_bdev_iscsi.so 00:02:06.756 SYMLINK libspdk_bdev_ftl.so 00:02:06.756 LIB libspdk_bdev_virtio.a 00:02:06.756 SO libspdk_bdev_lvol.so.6.0 00:02:07.015 SO libspdk_bdev_virtio.so.6.0 00:02:07.015 SYMLINK libspdk_bdev_malloc.so 00:02:07.015 SYMLINK libspdk_bdev_lvol.so 00:02:07.015 SYMLINK libspdk_bdev_virtio.so 00:02:07.274 LIB libspdk_bdev_raid.a 00:02:07.274 SO libspdk_bdev_raid.so.6.0 00:02:07.274 SYMLINK libspdk_bdev_raid.so 00:02:08.209 LIB libspdk_bdev_nvme.a 00:02:08.209 SO libspdk_bdev_nvme.so.7.0 00:02:08.209 SYMLINK libspdk_bdev_nvme.so 00:02:08.778 CC module/event/subsystems/fsdev/fsdev.o 00:02:08.778 CC module/event/subsystems/iobuf/iobuf.o 00:02:08.778 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:08.778 CC module/event/subsystems/vmd/vmd.o 00:02:08.778 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:08.778 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:08.778 CC module/event/subsystems/keyring/keyring.o 00:02:08.778 CC module/event/subsystems/sock/sock.o 00:02:08.778 CC module/event/subsystems/scheduler/scheduler.o 00:02:09.046 LIB libspdk_event_vhost_blk.a 00:02:09.046 LIB libspdk_event_sock.a 00:02:09.046 LIB libspdk_event_fsdev.a 00:02:09.046 LIB libspdk_event_iobuf.a 00:02:09.046 LIB libspdk_event_keyring.a 00:02:09.046 LIB libspdk_event_vmd.a 00:02:09.046 LIB libspdk_event_scheduler.a 00:02:09.047 SO libspdk_event_vhost_blk.so.3.0 00:02:09.047 SO libspdk_event_fsdev.so.1.0 00:02:09.047 SO libspdk_event_sock.so.5.0 00:02:09.047 SO libspdk_event_iobuf.so.3.0 00:02:09.047 SO libspdk_event_keyring.so.1.0 00:02:09.047 SO libspdk_event_vmd.so.6.0 00:02:09.047 SO libspdk_event_scheduler.so.4.0 00:02:09.047 SYMLINK libspdk_event_vhost_blk.so 00:02:09.047 SYMLINK libspdk_event_fsdev.so 00:02:09.047 SYMLINK libspdk_event_keyring.so 00:02:09.047 SYMLINK libspdk_event_sock.so 00:02:09.047 SYMLINK libspdk_event_iobuf.so 00:02:09.047 SYMLINK libspdk_event_vmd.so 00:02:09.047 SYMLINK libspdk_event_scheduler.so 00:02:09.306 CC module/event/subsystems/accel/accel.o 00:02:09.566 LIB libspdk_event_accel.a 00:02:09.566 SO libspdk_event_accel.so.6.0 00:02:09.566 SYMLINK libspdk_event_accel.so 00:02:10.133 CC module/event/subsystems/bdev/bdev.o 00:02:10.133 LIB libspdk_event_bdev.a 00:02:10.133 SO libspdk_event_bdev.so.6.0 00:02:10.133 SYMLINK libspdk_event_bdev.so 00:02:10.393 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:10.393 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:10.651 CC module/event/subsystems/scsi/scsi.o 00:02:10.651 CC module/event/subsystems/ublk/ublk.o 00:02:10.651 CC module/event/subsystems/nbd/nbd.o 00:02:10.651 LIB libspdk_event_scsi.a 00:02:10.651 LIB libspdk_event_nbd.a 00:02:10.651 LIB libspdk_event_ublk.a 00:02:10.652 SO 
libspdk_event_scsi.so.6.0 00:02:10.652 LIB libspdk_event_nvmf.a 00:02:10.652 SO libspdk_event_ublk.so.3.0 00:02:10.652 SO libspdk_event_nbd.so.6.0 00:02:10.652 SO libspdk_event_nvmf.so.6.0 00:02:10.910 SYMLINK libspdk_event_scsi.so 00:02:10.910 SYMLINK libspdk_event_ublk.so 00:02:10.910 SYMLINK libspdk_event_nbd.so 00:02:10.910 SYMLINK libspdk_event_nvmf.so 00:02:11.169 CC module/event/subsystems/iscsi/iscsi.o 00:02:11.169 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:11.169 LIB libspdk_event_iscsi.a 00:02:11.169 LIB libspdk_event_vhost_scsi.a 00:02:11.428 SO libspdk_event_iscsi.so.6.0 00:02:11.428 SO libspdk_event_vhost_scsi.so.3.0 00:02:11.428 SYMLINK libspdk_event_iscsi.so 00:02:11.428 SYMLINK libspdk_event_vhost_scsi.so 00:02:11.686 SO libspdk.so.6.0 00:02:11.686 SYMLINK libspdk.so 00:02:11.947 CC app/spdk_top/spdk_top.o 00:02:11.947 TEST_HEADER include/spdk/accel.h 00:02:11.947 TEST_HEADER include/spdk/accel_module.h 00:02:11.947 TEST_HEADER include/spdk/barrier.h 00:02:11.947 TEST_HEADER include/spdk/base64.h 00:02:11.947 TEST_HEADER include/spdk/assert.h 00:02:11.947 TEST_HEADER include/spdk/bdev.h 00:02:11.947 CC app/trace_record/trace_record.o 00:02:11.947 TEST_HEADER include/spdk/bdev_module.h 00:02:11.947 CXX app/trace/trace.o 00:02:11.947 TEST_HEADER include/spdk/bdev_zone.h 00:02:11.947 CC test/rpc_client/rpc_client_test.o 00:02:11.947 TEST_HEADER include/spdk/bit_array.h 00:02:11.947 CC app/spdk_nvme_perf/perf.o 00:02:11.947 TEST_HEADER include/spdk/bit_pool.h 00:02:11.947 TEST_HEADER include/spdk/blob_bdev.h 00:02:11.947 TEST_HEADER include/spdk/blobfs.h 00:02:11.947 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:11.947 TEST_HEADER include/spdk/blob.h 00:02:11.947 TEST_HEADER include/spdk/config.h 00:02:11.947 TEST_HEADER include/spdk/cpuset.h 00:02:11.947 TEST_HEADER include/spdk/conf.h 00:02:11.947 TEST_HEADER include/spdk/crc32.h 00:02:11.947 CC app/spdk_nvme_discover/discovery_aer.o 00:02:11.947 TEST_HEADER include/spdk/crc64.h 00:02:11.947 CC app/spdk_lspci/spdk_lspci.o 00:02:11.947 TEST_HEADER include/spdk/dma.h 00:02:11.947 TEST_HEADER include/spdk/dif.h 00:02:11.947 TEST_HEADER include/spdk/endian.h 00:02:11.947 TEST_HEADER include/spdk/env_dpdk.h 00:02:11.947 CC app/spdk_nvme_identify/identify.o 00:02:11.947 TEST_HEADER include/spdk/crc16.h 00:02:11.947 TEST_HEADER include/spdk/env.h 00:02:11.947 TEST_HEADER include/spdk/event.h 00:02:11.947 TEST_HEADER include/spdk/fd_group.h 00:02:11.947 TEST_HEADER include/spdk/fd.h 00:02:11.947 TEST_HEADER include/spdk/fsdev.h 00:02:11.947 TEST_HEADER include/spdk/file.h 00:02:11.947 TEST_HEADER include/spdk/fsdev_module.h 00:02:11.947 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:11.947 TEST_HEADER include/spdk/ftl.h 00:02:11.947 TEST_HEADER include/spdk/gpt_spec.h 00:02:11.947 TEST_HEADER include/spdk/histogram_data.h 00:02:11.947 TEST_HEADER include/spdk/hexlify.h 00:02:11.947 TEST_HEADER include/spdk/idxd_spec.h 00:02:11.947 TEST_HEADER include/spdk/idxd.h 00:02:11.947 TEST_HEADER include/spdk/init.h 00:02:11.947 TEST_HEADER include/spdk/ioat.h 00:02:11.947 TEST_HEADER include/spdk/ioat_spec.h 00:02:11.947 TEST_HEADER include/spdk/iscsi_spec.h 00:02:11.947 TEST_HEADER include/spdk/json.h 00:02:11.947 TEST_HEADER include/spdk/keyring.h 00:02:11.947 TEST_HEADER include/spdk/jsonrpc.h 00:02:11.947 TEST_HEADER include/spdk/keyring_module.h 00:02:11.947 TEST_HEADER include/spdk/likely.h 00:02:11.947 TEST_HEADER include/spdk/lvol.h 00:02:11.947 TEST_HEADER include/spdk/log.h 00:02:11.947 TEST_HEADER include/spdk/md5.h 
00:02:11.947 TEST_HEADER include/spdk/mmio.h 00:02:11.947 TEST_HEADER include/spdk/memory.h 00:02:11.947 TEST_HEADER include/spdk/nbd.h 00:02:11.947 TEST_HEADER include/spdk/net.h 00:02:11.947 TEST_HEADER include/spdk/notify.h 00:02:11.947 TEST_HEADER include/spdk/nvme_intel.h 00:02:11.947 TEST_HEADER include/spdk/nvme.h 00:02:11.947 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:11.947 TEST_HEADER include/spdk/nvme_spec.h 00:02:11.947 CC app/nvmf_tgt/nvmf_main.o 00:02:11.947 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:11.947 TEST_HEADER include/spdk/nvme_zns.h 00:02:11.947 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:11.947 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:11.947 TEST_HEADER include/spdk/nvmf.h 00:02:11.947 TEST_HEADER include/spdk/nvmf_spec.h 00:02:11.947 TEST_HEADER include/spdk/nvmf_transport.h 00:02:11.947 TEST_HEADER include/spdk/opal.h 00:02:11.947 TEST_HEADER include/spdk/opal_spec.h 00:02:11.947 TEST_HEADER include/spdk/pipe.h 00:02:11.947 TEST_HEADER include/spdk/pci_ids.h 00:02:11.947 TEST_HEADER include/spdk/queue.h 00:02:11.947 TEST_HEADER include/spdk/rpc.h 00:02:11.947 TEST_HEADER include/spdk/reduce.h 00:02:11.947 TEST_HEADER include/spdk/scheduler.h 00:02:11.947 TEST_HEADER include/spdk/scsi.h 00:02:11.947 TEST_HEADER include/spdk/scsi_spec.h 00:02:11.947 TEST_HEADER include/spdk/sock.h 00:02:11.947 TEST_HEADER include/spdk/stdinc.h 00:02:11.947 TEST_HEADER include/spdk/string.h 00:02:11.947 TEST_HEADER include/spdk/thread.h 00:02:11.947 TEST_HEADER include/spdk/trace.h 00:02:11.947 CC app/spdk_dd/spdk_dd.o 00:02:11.947 TEST_HEADER include/spdk/trace_parser.h 00:02:11.947 TEST_HEADER include/spdk/tree.h 00:02:11.947 TEST_HEADER include/spdk/ublk.h 00:02:11.947 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:11.947 TEST_HEADER include/spdk/util.h 00:02:11.947 TEST_HEADER include/spdk/uuid.h 00:02:11.947 TEST_HEADER include/spdk/version.h 00:02:11.947 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:11.947 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:11.947 TEST_HEADER include/spdk/vhost.h 00:02:11.947 TEST_HEADER include/spdk/vmd.h 00:02:11.947 TEST_HEADER include/spdk/xor.h 00:02:11.947 TEST_HEADER include/spdk/zipf.h 00:02:11.947 CXX test/cpp_headers/accel.o 00:02:11.947 CXX test/cpp_headers/accel_module.o 00:02:11.947 CXX test/cpp_headers/assert.o 00:02:11.947 CXX test/cpp_headers/barrier.o 00:02:11.947 CXX test/cpp_headers/base64.o 00:02:11.947 CXX test/cpp_headers/bdev.o 00:02:11.947 CXX test/cpp_headers/bdev_module.o 00:02:11.947 CXX test/cpp_headers/bdev_zone.o 00:02:11.947 CXX test/cpp_headers/bit_array.o 00:02:11.947 CXX test/cpp_headers/bit_pool.o 00:02:11.947 CXX test/cpp_headers/blob_bdev.o 00:02:11.947 CXX test/cpp_headers/blobfs_bdev.o 00:02:11.947 CXX test/cpp_headers/conf.o 00:02:11.947 CXX test/cpp_headers/blobfs.o 00:02:11.947 CXX test/cpp_headers/blob.o 00:02:11.947 CC app/iscsi_tgt/iscsi_tgt.o 00:02:11.947 CXX test/cpp_headers/config.o 00:02:11.947 CXX test/cpp_headers/cpuset.o 00:02:11.947 CXX test/cpp_headers/crc16.o 00:02:11.947 CXX test/cpp_headers/crc32.o 00:02:11.947 CXX test/cpp_headers/crc64.o 00:02:11.947 CXX test/cpp_headers/dif.o 00:02:11.947 CXX test/cpp_headers/dma.o 00:02:11.947 CXX test/cpp_headers/endian.o 00:02:11.947 CXX test/cpp_headers/env_dpdk.o 00:02:11.947 CXX test/cpp_headers/env.o 00:02:11.947 CXX test/cpp_headers/event.o 00:02:11.947 CXX test/cpp_headers/fd.o 00:02:11.947 CXX test/cpp_headers/fd_group.o 00:02:11.947 CXX test/cpp_headers/file.o 00:02:11.947 CXX test/cpp_headers/fsdev_module.o 00:02:11.947 CXX 
test/cpp_headers/fsdev.o 00:02:11.947 CC app/spdk_tgt/spdk_tgt.o 00:02:11.947 CXX test/cpp_headers/ftl.o 00:02:11.947 CXX test/cpp_headers/fuse_dispatcher.o 00:02:11.947 CXX test/cpp_headers/gpt_spec.o 00:02:11.947 CXX test/cpp_headers/hexlify.o 00:02:11.947 CXX test/cpp_headers/idxd.o 00:02:11.947 CXX test/cpp_headers/histogram_data.o 00:02:11.947 CXX test/cpp_headers/idxd_spec.o 00:02:11.947 CXX test/cpp_headers/ioat.o 00:02:11.947 CXX test/cpp_headers/init.o 00:02:11.947 CXX test/cpp_headers/iscsi_spec.o 00:02:11.947 CXX test/cpp_headers/ioat_spec.o 00:02:11.947 CXX test/cpp_headers/json.o 00:02:11.947 CC examples/ioat/perf/perf.o 00:02:11.947 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:11.947 CC test/thread/poller_perf/poller_perf.o 00:02:11.947 CC examples/util/zipf/zipf.o 00:02:11.947 CC test/env/memory/memory_ut.o 00:02:12.211 CC test/env/pci/pci_ut.o 00:02:12.211 CC test/app/jsoncat/jsoncat.o 00:02:12.211 CC examples/ioat/verify/verify.o 00:02:12.211 CC test/env/vtophys/vtophys.o 00:02:12.211 CC test/app/histogram_perf/histogram_perf.o 00:02:12.211 CC test/app/stub/stub.o 00:02:12.211 CC app/fio/nvme/fio_plugin.o 00:02:12.211 CC app/fio/bdev/fio_plugin.o 00:02:12.211 CC test/dma/test_dma/test_dma.o 00:02:12.211 CC test/app/bdev_svc/bdev_svc.o 00:02:12.211 LINK spdk_lspci 00:02:12.473 LINK spdk_nvme_discover 00:02:12.473 CC test/env/mem_callbacks/mem_callbacks.o 00:02:12.473 LINK nvmf_tgt 00:02:12.473 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:12.473 LINK spdk_trace_record 00:02:12.473 LINK rpc_client_test 00:02:12.473 LINK interrupt_tgt 00:02:12.473 LINK zipf 00:02:12.473 LINK iscsi_tgt 00:02:12.473 CXX test/cpp_headers/jsonrpc.o 00:02:12.473 LINK env_dpdk_post_init 00:02:12.473 CXX test/cpp_headers/keyring.o 00:02:12.473 CXX test/cpp_headers/keyring_module.o 00:02:12.473 LINK spdk_tgt 00:02:12.473 LINK poller_perf 00:02:12.473 CXX test/cpp_headers/likely.o 00:02:12.473 CXX test/cpp_headers/log.o 00:02:12.473 CXX test/cpp_headers/lvol.o 00:02:12.473 CXX test/cpp_headers/md5.o 00:02:12.473 CXX test/cpp_headers/memory.o 00:02:12.473 CXX test/cpp_headers/mmio.o 00:02:12.473 CXX test/cpp_headers/nbd.o 00:02:12.473 CXX test/cpp_headers/net.o 00:02:12.473 CXX test/cpp_headers/notify.o 00:02:12.473 CXX test/cpp_headers/nvme.o 00:02:12.473 CXX test/cpp_headers/nvme_intel.o 00:02:12.473 CXX test/cpp_headers/nvme_ocssd.o 00:02:12.473 LINK jsoncat 00:02:12.473 LINK ioat_perf 00:02:12.473 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:12.473 LINK histogram_perf 00:02:12.473 LINK vtophys 00:02:12.473 CXX test/cpp_headers/nvme_spec.o 00:02:12.473 CXX test/cpp_headers/nvme_zns.o 00:02:12.473 LINK verify 00:02:12.473 CXX test/cpp_headers/nvmf_cmd.o 00:02:12.473 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:12.473 CXX test/cpp_headers/nvmf.o 00:02:12.473 CXX test/cpp_headers/nvmf_transport.o 00:02:12.473 CXX test/cpp_headers/nvmf_spec.o 00:02:12.473 CXX test/cpp_headers/opal.o 00:02:12.732 LINK stub 00:02:12.732 CXX test/cpp_headers/opal_spec.o 00:02:12.732 CXX test/cpp_headers/pci_ids.o 00:02:12.732 CXX test/cpp_headers/pipe.o 00:02:12.732 CXX test/cpp_headers/queue.o 00:02:12.733 CXX test/cpp_headers/scheduler.o 00:02:12.733 CXX test/cpp_headers/rpc.o 00:02:12.733 CXX test/cpp_headers/reduce.o 00:02:12.733 CXX test/cpp_headers/scsi.o 00:02:12.733 CXX test/cpp_headers/scsi_spec.o 00:02:12.733 CXX test/cpp_headers/sock.o 00:02:12.733 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:12.733 CXX test/cpp_headers/stdinc.o 00:02:12.733 CXX test/cpp_headers/string.o 00:02:12.733 CXX 
test/cpp_headers/thread.o 00:02:12.733 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:12.733 CXX test/cpp_headers/trace.o 00:02:12.733 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:12.733 CXX test/cpp_headers/trace_parser.o 00:02:12.733 CXX test/cpp_headers/tree.o 00:02:12.733 CXX test/cpp_headers/ublk.o 00:02:12.733 CXX test/cpp_headers/util.o 00:02:12.733 CXX test/cpp_headers/uuid.o 00:02:12.733 CXX test/cpp_headers/version.o 00:02:12.733 CXX test/cpp_headers/vfio_user_pci.o 00:02:12.733 LINK bdev_svc 00:02:12.733 LINK spdk_dd 00:02:12.733 LINK spdk_trace 00:02:12.733 CXX test/cpp_headers/vfio_user_spec.o 00:02:12.733 CXX test/cpp_headers/vhost.o 00:02:12.733 CXX test/cpp_headers/vmd.o 00:02:12.733 CXX test/cpp_headers/xor.o 00:02:12.733 CXX test/cpp_headers/zipf.o 00:02:12.994 LINK pci_ut 00:02:12.994 LINK spdk_nvme 00:02:12.994 LINK nvme_fuzz 00:02:12.994 LINK spdk_bdev 00:02:12.994 CC examples/vmd/led/led.o 00:02:12.994 CC examples/vmd/lsvmd/lsvmd.o 00:02:12.994 CC examples/sock/hello_world/hello_sock.o 00:02:12.994 CC examples/idxd/perf/perf.o 00:02:12.994 CC test/event/event_perf/event_perf.o 00:02:12.994 CC test/event/reactor_perf/reactor_perf.o 00:02:12.994 LINK test_dma 00:02:12.994 CC test/event/reactor/reactor.o 00:02:12.994 CC test/event/app_repeat/app_repeat.o 00:02:12.994 CC examples/thread/thread/thread_ex.o 00:02:13.252 LINK spdk_nvme_perf 00:02:13.253 CC test/event/scheduler/scheduler.o 00:02:13.253 LINK mem_callbacks 00:02:13.253 LINK spdk_nvme_identify 00:02:13.253 LINK led 00:02:13.253 LINK lsvmd 00:02:13.253 LINK reactor_perf 00:02:13.253 LINK event_perf 00:02:13.253 LINK reactor 00:02:13.253 CC app/vhost/vhost.o 00:02:13.253 LINK vhost_fuzz 00:02:13.253 LINK app_repeat 00:02:13.253 LINK hello_sock 00:02:13.253 LINK spdk_top 00:02:13.253 LINK scheduler 00:02:13.253 LINK idxd_perf 00:02:13.253 LINK thread 00:02:13.511 LINK vhost 00:02:13.511 LINK memory_ut 00:02:13.511 CC test/nvme/startup/startup.o 00:02:13.511 CC test/nvme/err_injection/err_injection.o 00:02:13.511 CC test/nvme/boot_partition/boot_partition.o 00:02:13.511 CC test/nvme/sgl/sgl.o 00:02:13.511 CC test/nvme/overhead/overhead.o 00:02:13.511 CC test/nvme/cuse/cuse.o 00:02:13.511 CC test/nvme/reset/reset.o 00:02:13.511 CC test/nvme/aer/aer.o 00:02:13.511 CC test/nvme/e2edp/nvme_dp.o 00:02:13.511 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:13.511 CC test/nvme/compliance/nvme_compliance.o 00:02:13.511 CC test/nvme/connect_stress/connect_stress.o 00:02:13.511 CC test/nvme/fused_ordering/fused_ordering.o 00:02:13.511 CC test/nvme/reserve/reserve.o 00:02:13.511 CC test/nvme/fdp/fdp.o 00:02:13.511 CC test/nvme/simple_copy/simple_copy.o 00:02:13.511 CC test/blobfs/mkfs/mkfs.o 00:02:13.511 CC test/accel/dif/dif.o 00:02:13.769 CC test/lvol/esnap/esnap.o 00:02:13.769 CC examples/nvme/abort/abort.o 00:02:13.769 CC examples/nvme/reconnect/reconnect.o 00:02:13.769 CC examples/nvme/hello_world/hello_world.o 00:02:13.769 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:13.769 CC examples/nvme/arbitration/arbitration.o 00:02:13.769 CC examples/nvme/hotplug/hotplug.o 00:02:13.769 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:13.769 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:13.769 LINK startup 00:02:13.769 LINK boot_partition 00:02:13.769 LINK err_injection 00:02:13.769 LINK connect_stress 00:02:13.769 LINK doorbell_aers 00:02:13.769 LINK mkfs 00:02:13.769 CC examples/accel/perf/accel_perf.o 00:02:13.769 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:13.769 CC examples/blob/cli/blobcli.o 00:02:13.769 
LINK reserve 00:02:13.769 LINK fused_ordering 00:02:13.769 LINK simple_copy 00:02:13.769 LINK reset 00:02:13.769 CC examples/blob/hello_world/hello_blob.o 00:02:13.769 LINK nvme_dp 00:02:13.769 LINK sgl 00:02:13.769 LINK aer 00:02:13.769 LINK overhead 00:02:13.769 LINK pmr_persistence 00:02:14.028 LINK nvme_compliance 00:02:14.028 LINK cmb_copy 00:02:14.028 LINK fdp 00:02:14.028 LINK hello_world 00:02:14.028 LINK hotplug 00:02:14.028 LINK arbitration 00:02:14.028 LINK reconnect 00:02:14.028 LINK abort 00:02:14.028 LINK hello_fsdev 00:02:14.028 LINK hello_blob 00:02:14.028 LINK nvme_manage 00:02:14.287 LINK dif 00:02:14.287 LINK accel_perf 00:02:14.287 LINK blobcli 00:02:14.287 LINK iscsi_fuzz 00:02:14.546 LINK cuse 00:02:14.805 CC test/bdev/bdevio/bdevio.o 00:02:14.806 CC examples/bdev/hello_world/hello_bdev.o 00:02:14.806 CC examples/bdev/bdevperf/bdevperf.o 00:02:15.065 LINK hello_bdev 00:02:15.065 LINK bdevio 00:02:15.323 LINK bdevperf 00:02:15.890 CC examples/nvmf/nvmf/nvmf.o 00:02:16.149 LINK nvmf 00:02:17.527 LINK esnap 00:02:17.527 00:02:17.527 real 0m57.222s 00:02:17.527 user 8m4.743s 00:02:17.527 sys 3m19.303s 00:02:17.527 17:26:55 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:17.527 17:26:55 make -- common/autotest_common.sh@10 -- $ set +x 00:02:17.527 ************************************ 00:02:17.527 END TEST make 00:02:17.527 ************************************ 00:02:17.527 17:26:55 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:17.527 17:26:55 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:17.527 17:26:55 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:17.527 17:26:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.527 17:26:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:17.527 17:26:55 -- pm/common@44 -- $ pid=401139 00:02:17.527 17:26:55 -- pm/common@50 -- $ kill -TERM 401139 00:02:17.527 17:26:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.527 17:26:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:17.527 17:26:55 -- pm/common@44 -- $ pid=401141 00:02:17.527 17:26:55 -- pm/common@50 -- $ kill -TERM 401141 00:02:17.527 17:26:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.527 17:26:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:17.527 17:26:55 -- pm/common@44 -- $ pid=401142 00:02:17.527 17:26:55 -- pm/common@50 -- $ kill -TERM 401142 00:02:17.527 17:26:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.527 17:26:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:17.527 17:26:55 -- pm/common@44 -- $ pid=401169 00:02:17.527 17:26:55 -- pm/common@50 -- $ sudo -E kill -TERM 401169 00:02:17.787 17:26:56 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:02:17.787 17:26:56 -- common/autotest_common.sh@1691 -- # lcov --version 00:02:17.787 17:26:56 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:17.787 17:26:56 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:17.787 17:26:56 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:17.787 17:26:56 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:17.787 17:26:56 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:17.787 17:26:56 -- scripts/common.sh@336 -- # 
IFS=.-: 00:02:17.787 17:26:56 -- scripts/common.sh@336 -- # read -ra ver1 00:02:17.787 17:26:56 -- scripts/common.sh@337 -- # IFS=.-: 00:02:17.787 17:26:56 -- scripts/common.sh@337 -- # read -ra ver2 00:02:17.787 17:26:56 -- scripts/common.sh@338 -- # local 'op=<' 00:02:17.787 17:26:56 -- scripts/common.sh@340 -- # ver1_l=2 00:02:17.787 17:26:56 -- scripts/common.sh@341 -- # ver2_l=1 00:02:17.787 17:26:56 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:17.787 17:26:56 -- scripts/common.sh@344 -- # case "$op" in 00:02:17.787 17:26:56 -- scripts/common.sh@345 -- # : 1 00:02:17.787 17:26:56 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:17.787 17:26:56 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:17.787 17:26:56 -- scripts/common.sh@365 -- # decimal 1 00:02:17.787 17:26:56 -- scripts/common.sh@353 -- # local d=1 00:02:17.787 17:26:56 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:17.787 17:26:56 -- scripts/common.sh@355 -- # echo 1 00:02:17.787 17:26:56 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:17.787 17:26:56 -- scripts/common.sh@366 -- # decimal 2 00:02:17.787 17:26:56 -- scripts/common.sh@353 -- # local d=2 00:02:17.787 17:26:56 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:17.787 17:26:56 -- scripts/common.sh@355 -- # echo 2 00:02:17.787 17:26:56 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:17.787 17:26:56 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:17.787 17:26:56 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:17.787 17:26:56 -- scripts/common.sh@368 -- # return 0 00:02:17.787 17:26:56 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:17.787 17:26:56 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:17.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:17.787 --rc genhtml_branch_coverage=1 00:02:17.787 --rc genhtml_function_coverage=1 00:02:17.787 --rc genhtml_legend=1 00:02:17.787 --rc geninfo_all_blocks=1 00:02:17.787 --rc geninfo_unexecuted_blocks=1 00:02:17.787 00:02:17.787 ' 00:02:17.787 17:26:56 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:17.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:17.787 --rc genhtml_branch_coverage=1 00:02:17.787 --rc genhtml_function_coverage=1 00:02:17.787 --rc genhtml_legend=1 00:02:17.787 --rc geninfo_all_blocks=1 00:02:17.787 --rc geninfo_unexecuted_blocks=1 00:02:17.787 00:02:17.787 ' 00:02:17.787 17:26:56 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:17.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:17.787 --rc genhtml_branch_coverage=1 00:02:17.787 --rc genhtml_function_coverage=1 00:02:17.787 --rc genhtml_legend=1 00:02:17.787 --rc geninfo_all_blocks=1 00:02:17.787 --rc geninfo_unexecuted_blocks=1 00:02:17.787 00:02:17.787 ' 00:02:17.788 17:26:56 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:17.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:17.788 --rc genhtml_branch_coverage=1 00:02:17.788 --rc genhtml_function_coverage=1 00:02:17.788 --rc genhtml_legend=1 00:02:17.788 --rc geninfo_all_blocks=1 00:02:17.788 --rc geninfo_unexecuted_blocks=1 00:02:17.788 00:02:17.788 ' 00:02:17.788 17:26:56 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:02:17.788 17:26:56 -- nvmf/common.sh@7 -- # uname -s 00:02:17.788 17:26:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:17.788 17:26:56 -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:17.788 17:26:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:17.788 17:26:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:17.788 17:26:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:17.788 17:26:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:17.788 17:26:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:17.788 17:26:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:17.788 17:26:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:17.788 17:26:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:17.788 17:26:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:02:17.788 17:26:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:02:17.788 17:26:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:17.788 17:26:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:17.788 17:26:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:17.788 17:26:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:17.788 17:26:56 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:17.788 17:26:56 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:17.788 17:26:56 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:17.788 17:26:56 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:17.788 17:26:56 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:17.788 17:26:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.788 17:26:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.788 17:26:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.788 17:26:56 -- paths/export.sh@5 -- # export PATH 00:02:17.788 17:26:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.788 17:26:56 -- nvmf/common.sh@51 -- # : 0 00:02:17.788 17:26:56 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:17.788 17:26:56 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:17.788 17:26:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:17.788 17:26:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:17.788 17:26:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:17.788 17:26:56 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:17.788 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer 
expression expected 00:02:17.788 17:26:56 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:17.788 17:26:56 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:17.788 17:26:56 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:17.788 17:26:56 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:17.788 17:26:56 -- spdk/autotest.sh@32 -- # uname -s 00:02:18.047 17:26:56 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:18.047 17:26:56 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:18.047 17:26:56 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:18.047 17:26:56 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:18.047 17:26:56 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:02:18.047 17:26:56 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:18.047 17:26:56 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:18.047 17:26:56 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:18.047 17:26:56 -- spdk/autotest.sh@48 -- # udevadm_pid=460981 00:02:18.047 17:26:56 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:18.047 17:26:56 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:18.047 17:26:56 -- pm/common@17 -- # local monitor 00:02:18.047 17:26:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.047 17:26:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.047 17:26:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.047 17:26:56 -- pm/common@21 -- # date +%s 00:02:18.047 17:26:56 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.047 17:26:56 -- pm/common@21 -- # date +%s 00:02:18.048 17:26:56 -- pm/common@25 -- # sleep 1 00:02:18.048 17:26:56 -- pm/common@21 -- # date +%s 00:02:18.048 17:26:56 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1729178816 00:02:18.048 17:26:56 -- pm/common@21 -- # date +%s 00:02:18.048 17:26:56 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1729178816 00:02:18.048 17:26:56 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1729178816 00:02:18.048 17:26:56 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1729178816 00:02:18.048 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1729178816_collect-vmstat.pm.log 00:02:18.048 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1729178816_collect-cpu-load.pm.log 00:02:18.048 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1729178816_collect-cpu-temp.pm.log 00:02:18.048 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1729178816_collect-bmc-pm.bmc.pm.log 00:02:19.000 17:26:57 -- spdk/autotest.sh@55 -- # trap 
'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:19.000 17:26:57 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:19.000 17:26:57 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:19.000 17:26:57 -- common/autotest_common.sh@10 -- # set +x 00:02:19.000 17:26:57 -- spdk/autotest.sh@59 -- # create_test_list 00:02:19.000 17:26:57 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:19.000 17:26:57 -- common/autotest_common.sh@10 -- # set +x 00:02:19.000 17:26:57 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:02:19.000 17:26:57 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:19.000 17:26:57 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:19.000 17:26:57 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:19.000 17:26:57 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:19.000 17:26:57 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:19.000 17:26:57 -- common/autotest_common.sh@1455 -- # uname 00:02:19.000 17:26:57 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:19.000 17:26:57 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:19.000 17:26:57 -- common/autotest_common.sh@1475 -- # uname 00:02:19.000 17:26:57 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:19.000 17:26:57 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:19.000 17:26:57 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:19.000 lcov: LCOV version 1.15 00:02:19.000 17:26:57 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:02:33.878 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:33.878 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:46.143 17:27:22 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:46.143 17:27:22 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:46.143 17:27:22 -- common/autotest_common.sh@10 -- # set +x 00:02:46.143 17:27:22 -- spdk/autotest.sh@78 -- # rm -f 00:02:46.143 17:27:22 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:02:48.044 0000:5e:00.0 (144d a80a): Already using the nvme driver 00:02:48.044 0000:af:00.0 (8086 2701): Already using the nvme driver 00:02:48.044 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:48.044 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:48.044 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:48.044 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:48.044 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:48.044 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:48.044 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:48.303 0000:00:04.0 (8086 2021): 
Already using the ioatdma driver 00:02:48.303 0000:b0:00.0 (8086 2701): Already using the nvme driver 00:02:48.303 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:48.303 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:48.303 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:48.303 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:48.303 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:48.303 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:48.303 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:48.303 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:48.561 17:27:26 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:02:48.561 17:27:26 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:48.561 17:27:26 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:48.561 17:27:26 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:48.561 17:27:26 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:48.561 17:27:26 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:48.561 17:27:26 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:48.561 17:27:26 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:48.561 17:27:26 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:48.561 17:27:26 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:48.561 17:27:26 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:02:48.561 17:27:26 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:02:48.561 17:27:26 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:02:48.561 17:27:26 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:48.561 17:27:26 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:48.561 17:27:26 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:02:48.561 17:27:26 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:02:48.561 17:27:26 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:02:48.561 17:27:26 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:48.561 17:27:26 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:02:48.561 17:27:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:48.561 17:27:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:02:48.561 17:27:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:02:48.561 17:27:26 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:02:48.561 17:27:26 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:48.561 No valid GPT data, bailing 00:02:48.562 17:27:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:48.562 17:27:26 -- scripts/common.sh@394 -- # pt= 00:02:48.562 17:27:26 -- scripts/common.sh@395 -- # return 1 00:02:48.562 17:27:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:48.562 1+0 records in 00:02:48.562 1+0 records out 00:02:48.562 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00485367 s, 216 MB/s 00:02:48.562 17:27:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:48.562 17:27:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:02:48.562 17:27:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:02:48.562 17:27:26 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 
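The dd wipes above and below all follow one pattern per NVMe namespace: skip the device if it is zoned, let block_in_use probe it with spdk-gpt.py and blkid, and zero the first MiB only when nothing claims the disk. A minimal sketch of that loop, assuming spdk-gpt.py exits non-zero when it bails on a missing GPT (the zoned-device bookkeeping is simplified):

shopt -s extglob                                     # the !(*p*) glob below needs extglob
for dev in /dev/nvme*n!(*p*); do                     # whole namespaces, never partitions
    [[ -n ${zoned_devs[${dev##*/}]} ]] && continue   # zoned namespaces are left alone
    if ! scripts/spdk-gpt.py "$dev" 2> /dev/null &&
       [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1      # clear stale metadata in the first MiB
    fi
done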
00:02:48.562 17:27:26 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:02:48.562 No valid GPT data, bailing 00:02:48.562 17:27:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:02:48.562 17:27:26 -- scripts/common.sh@394 -- # pt= 00:02:48.562 17:27:26 -- scripts/common.sh@395 -- # return 1 00:02:48.562 17:27:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:02:48.562 1+0 records in 00:02:48.562 1+0 records out 00:02:48.562 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00459855 s, 228 MB/s 00:02:48.562 17:27:26 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:48.562 17:27:26 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:02:48.562 17:27:26 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:02:48.562 17:27:26 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:02:48.562 17:27:26 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:02:48.562 No valid GPT data, bailing 00:02:48.562 17:27:26 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:02:48.562 17:27:26 -- scripts/common.sh@394 -- # pt= 00:02:48.562 17:27:26 -- scripts/common.sh@395 -- # return 1 00:02:48.562 17:27:26 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:02:48.562 1+0 records in 00:02:48.562 1+0 records out 00:02:48.562 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00162638 s, 645 MB/s 00:02:48.562 17:27:26 -- spdk/autotest.sh@105 -- # sync 00:02:48.562 17:27:26 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:48.562 17:27:26 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:48.562 17:27:26 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:53.824 17:27:31 -- spdk/autotest.sh@111 -- # uname -s 00:02:53.824 17:27:31 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:02:53.824 17:27:31 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:02:53.824 17:27:31 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:02:56.364 Hugepages 00:02:56.364 node hugesize free / total 00:02:56.364 node0 1048576kB 0 / 0 00:02:56.364 node0 2048kB 0 / 0 00:02:56.364 node1 1048576kB 0 / 0 00:02:56.364 node1 2048kB 0 / 0 00:02:56.364 00:02:56.364 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:56.364 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:02:56.364 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:02:56.364 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:02:56.364 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:02:56.364 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:02:56.364 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:02:56.622 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:02:56.622 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:02:56.622 NVMe 0000:5e:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:02:56.622 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:02:56.622 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:02:56.622 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:02:56.622 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:02:56.622 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:02:56.622 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:02:56.622 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:02:56.622 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:02:56.882 NVMe 0000:af:00.0 8086 2701 1 nvme nvme1 nvme1n1 00:02:56.882 NVMe 0000:b0:00.0 8086 2701 1 nvme nvme2 nvme2n1 00:02:56.882 17:27:35 -- 
spdk/autotest.sh@117 -- # uname -s 00:02:56.882 17:27:35 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:02:56.882 17:27:35 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:02:56.882 17:27:35 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:00.164 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:00.165 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:00.165 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:00.165 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:00.165 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:00.165 0000:af:00.0 (8086 2701): nvme -> vfio-pci 00:03:00.165 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:00.165 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:00.165 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:00.165 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:00.165 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:00.165 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:00.165 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:00.165 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:00.165 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:00.165 0000:b0:00.0 (8086 2701): nvme -> vfio-pci 00:03:00.165 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:00.165 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:01.541 0000:5e:00.0 (144d a80a): nvme -> vfio-pci 00:03:01.541 17:27:39 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:02.916 17:27:40 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:02.916 17:27:40 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:02.916 17:27:40 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:02.916 17:27:40 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:02.916 17:27:40 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:02.916 17:27:40 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:02.916 17:27:40 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:02.916 17:27:40 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:02.916 17:27:40 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:02.916 17:27:41 -- common/autotest_common.sh@1498 -- # (( 3 == 0 )) 00:03:02.916 17:27:41 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 0000:af:00.0 0000:b0:00.0 00:03:02.916 17:27:41 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:06.199 Waiting for block devices as requested 00:03:06.199 0000:5e:00.0 (144d a80a): vfio-pci -> nvme 00:03:06.199 0000:af:00.0 (8086 2701): vfio-pci -> nvme 00:03:06.457 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:06.457 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:06.715 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:06.715 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:06.715 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:06.715 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:06.972 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:06.972 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:06.973 0000:b0:00.0 (8086 2701): vfio-pci -> nvme 00:03:07.231 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:07.231 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:07.489 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:07.489 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:07.489 0000:80:04.3 
(8086 2021): vfio-pci -> ioatdma 00:03:07.489 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:07.748 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:07.748 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:07.748 17:27:46 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:07.748 17:27:46 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:08.006 17:27:46 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 00:03:08.006 17:27:46 -- common/autotest_common.sh@1485 -- # grep 0000:5e:00.0/nvme/nvme 00:03:08.006 17:27:46 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:08.006 17:27:46 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:08.006 17:27:46 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:08.006 17:27:46 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:08.006 17:27:46 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:08.006 17:27:46 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:08.006 17:27:46 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:08.006 17:27:46 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:08.006 17:27:46 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:08.006 17:27:46 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:03:08.006 17:27:46 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:08.006 17:27:46 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:08.006 17:27:46 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:08.006 17:27:46 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:08.006 17:27:46 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:08.006 17:27:46 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:08.006 17:27:46 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:08.006 17:27:46 -- common/autotest_common.sh@1541 -- # continue 00:03:08.006 17:27:46 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:08.006 17:27:46 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:af:00.0 00:03:08.006 17:27:46 -- common/autotest_common.sh@1485 -- # grep 0000:af:00.0/nvme/nvme 00:03:08.006 17:27:46 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 00:03:08.006 17:27:46 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:ae/0000:ae:00.0/0000:af:00.0/nvme/nvme1 00:03:08.007 17:27:46 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:ae/0000:ae:00.0/0000:af:00.0/nvme/nvme1 ]] 00:03:08.007 17:27:46 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:ae/0000:ae:00.0/0000:af:00.0/nvme/nvme1 00:03:08.007 17:27:46 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:03:08.007 17:27:46 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:03:08.007 17:27:46 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:03:08.007 17:27:46 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:03:08.007 17:27:46 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:08.007 17:27:46 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:08.007 17:27:46 -- common/autotest_common.sh@1529 -- # oacs=' 0x7' 00:03:08.007 17:27:46 -- 
common/autotest_common.sh@1530 -- # oacs_ns_manage=0 00:03:08.007 17:27:46 -- common/autotest_common.sh@1532 -- # [[ 0 -ne 0 ]] 00:03:08.007 17:27:46 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:08.007 17:27:46 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:b0:00.0 00:03:08.007 17:27:46 -- common/autotest_common.sh@1485 -- # grep 0000:b0:00.0/nvme/nvme 00:03:08.007 17:27:46 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 00:03:08.007 17:27:46 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:ae/0000:ae:02.0/0000:b0:00.0/nvme/nvme2 00:03:08.007 17:27:46 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:ae/0000:ae:02.0/0000:b0:00.0/nvme/nvme2 ]] 00:03:08.007 17:27:46 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:ae/0000:ae:02.0/0000:b0:00.0/nvme/nvme2 00:03:08.007 17:27:46 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:03:08.007 17:27:46 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:03:08.007 17:27:46 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:03:08.007 17:27:46 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:03:08.007 17:27:46 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:08.007 17:27:46 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:08.007 17:27:46 -- common/autotest_common.sh@1529 -- # oacs=' 0x7' 00:03:08.007 17:27:46 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=0 00:03:08.007 17:27:46 -- common/autotest_common.sh@1532 -- # [[ 0 -ne 0 ]] 00:03:08.007 17:27:46 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:08.007 17:27:46 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:08.007 17:27:46 -- common/autotest_common.sh@10 -- # set +x 00:03:08.007 17:27:46 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:08.007 17:27:46 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:08.007 17:27:46 -- common/autotest_common.sh@10 -- # set +x 00:03:08.007 17:27:46 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:03:11.310 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:11.569 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:11.569 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:11.569 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:11.569 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:11.569 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:11.569 0000:af:00.0 (8086 2701): nvme -> vfio-pci 00:03:11.569 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:11.569 0000:5e:00.0 (144d a80a): nvme -> vfio-pci 00:03:11.569 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:11.569 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:11.569 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:11.569 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:11.569 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:11.569 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:11.569 0000:b0:00.0 (8086 2701): nvme -> vfio-pci 00:03:11.569 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:11.569 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:11.569 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:11.827 17:27:50 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:11.827 17:27:50 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:11.827 17:27:50 -- common/autotest_common.sh@10 -- # set +x 00:03:11.827 17:27:50 -- spdk/autotest.sh@131 
-- # opal_revert_cleanup 00:03:11.827 17:27:50 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:11.827 17:27:50 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:11.827 17:27:50 -- common/autotest_common.sh@1561 -- # bdfs=() 00:03:11.827 17:27:50 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:03:11.827 17:27:50 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:03:11.828 17:27:50 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:03:11.828 17:27:50 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:03:11.828 17:27:50 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:11.828 17:27:50 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:11.828 17:27:50 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:11.828 17:27:50 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:11.828 17:27:50 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:12.086 17:27:50 -- common/autotest_common.sh@1498 -- # (( 3 == 0 )) 00:03:12.086 17:27:50 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:5e:00.0 0000:af:00.0 0000:b0:00.0 00:03:12.086 17:27:50 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:12.086 17:27:50 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:12.086 17:27:50 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:03:12.086 17:27:50 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:03:12.086 17:27:50 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:12.086 17:27:50 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:af:00.0/device 00:03:12.086 17:27:50 -- common/autotest_common.sh@1564 -- # device=0x2701 00:03:12.086 17:27:50 -- common/autotest_common.sh@1565 -- # [[ 0x2701 == \0\x\0\a\5\4 ]] 00:03:12.086 17:27:50 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:03:12.086 17:27:50 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:b0:00.0/device 00:03:12.086 17:27:50 -- common/autotest_common.sh@1564 -- # device=0x2701 00:03:12.086 17:27:50 -- common/autotest_common.sh@1565 -- # [[ 0x2701 == \0\x\0\a\5\4 ]] 00:03:12.086 17:27:50 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:03:12.086 17:27:50 -- common/autotest_common.sh@1570 -- # return 0 00:03:12.086 17:27:50 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:03:12.086 17:27:50 -- common/autotest_common.sh@1578 -- # return 0 00:03:12.086 17:27:50 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:12.086 17:27:50 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:12.086 17:27:50 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:12.086 17:27:50 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:12.086 17:27:50 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:12.086 17:27:50 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:12.086 17:27:50 -- common/autotest_common.sh@10 -- # set +x 00:03:12.086 17:27:50 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:12.086 17:27:50 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:03:12.086 17:27:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:12.087 17:27:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:12.087 17:27:50 -- common/autotest_common.sh@10 -- # set +x 00:03:12.087 ************************************ 
00:03:12.087 START TEST env 00:03:12.087 ************************************ 00:03:12.087 17:27:50 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:03:12.087 * Looking for test storage... 00:03:12.087 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:03:12.087 17:27:50 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:12.087 17:27:50 env -- common/autotest_common.sh@1691 -- # lcov --version 00:03:12.087 17:27:50 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:12.345 17:27:50 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:12.345 17:27:50 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:12.345 17:27:50 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:12.345 17:27:50 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:12.345 17:27:50 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:12.345 17:27:50 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:12.345 17:27:50 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:12.345 17:27:50 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:12.345 17:27:50 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:12.345 17:27:50 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:12.345 17:27:50 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:12.346 17:27:50 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:12.346 17:27:50 env -- scripts/common.sh@344 -- # case "$op" in 00:03:12.346 17:27:50 env -- scripts/common.sh@345 -- # : 1 00:03:12.346 17:27:50 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:12.346 17:27:50 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:12.346 17:27:50 env -- scripts/common.sh@365 -- # decimal 1 00:03:12.346 17:27:50 env -- scripts/common.sh@353 -- # local d=1 00:03:12.346 17:27:50 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:12.346 17:27:50 env -- scripts/common.sh@355 -- # echo 1 00:03:12.346 17:27:50 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:12.346 17:27:50 env -- scripts/common.sh@366 -- # decimal 2 00:03:12.346 17:27:50 env -- scripts/common.sh@353 -- # local d=2 00:03:12.346 17:27:50 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:12.346 17:27:50 env -- scripts/common.sh@355 -- # echo 2 00:03:12.346 17:27:50 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:12.346 17:27:50 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:12.346 17:27:50 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:12.346 17:27:50 env -- scripts/common.sh@368 -- # return 0 00:03:12.346 17:27:50 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:12.346 17:27:50 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:12.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:12.346 --rc genhtml_branch_coverage=1 00:03:12.346 --rc genhtml_function_coverage=1 00:03:12.346 --rc genhtml_legend=1 00:03:12.346 --rc geninfo_all_blocks=1 00:03:12.346 --rc geninfo_unexecuted_blocks=1 00:03:12.346 00:03:12.346 ' 00:03:12.346 17:27:50 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:12.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:12.346 --rc genhtml_branch_coverage=1 00:03:12.346 --rc genhtml_function_coverage=1 00:03:12.346 --rc genhtml_legend=1 00:03:12.346 --rc geninfo_all_blocks=1 00:03:12.346 --rc geninfo_unexecuted_blocks=1 00:03:12.346 00:03:12.346 ' 
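The lt 1.15 2 gate traced above is scripts/common.sh comparing the installed lcov version against 2.x field by field; only when the installed version is older do the legacy --rc lcov_branch_coverage/lcov_function_coverage flags get exported. A condensed sketch of that comparison (the real helper also validates every field through its decimal check, omitted here for brevity):

cmp_versions() {                        # e.g. cmp_versions 1.15 '<' 2
    local IFS=.-: op=$2 v
    local -a ver1 ver2
    read -ra ver1 <<< "$1"              # "1.15" -> (1 15)
    read -ra ver2 <<< "$3"              # "2"    -> (2)
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        if ((${ver1[v]:-0} > ${ver2[v]:-0})); then [[ $op == '>' ]]; return; fi
        if ((${ver1[v]:-0} < ${ver2[v]:-0})); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == *'='* ]]                  # equal versions satisfy only ==, <= or >=
}
lt() { cmp_versions "$1" '<' "$2"; }    # lt 1.15 2 -> true, so the coverage flags are set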
00:03:12.346 17:27:50 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:12.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:12.346 --rc genhtml_branch_coverage=1 00:03:12.346 --rc genhtml_function_coverage=1 00:03:12.346 --rc genhtml_legend=1 00:03:12.346 --rc geninfo_all_blocks=1 00:03:12.346 --rc geninfo_unexecuted_blocks=1 00:03:12.346 00:03:12.346 ' 00:03:12.346 17:27:50 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:12.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:12.346 --rc genhtml_branch_coverage=1 00:03:12.346 --rc genhtml_function_coverage=1 00:03:12.346 --rc genhtml_legend=1 00:03:12.346 --rc geninfo_all_blocks=1 00:03:12.346 --rc geninfo_unexecuted_blocks=1 00:03:12.346 00:03:12.346 ' 00:03:12.346 17:27:50 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:03:12.346 17:27:50 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:12.346 17:27:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:12.346 17:27:50 env -- common/autotest_common.sh@10 -- # set +x 00:03:12.346 ************************************ 00:03:12.346 START TEST env_memory 00:03:12.346 ************************************ 00:03:12.346 17:27:50 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:03:12.346 00:03:12.346 00:03:12.346 CUnit - A unit testing framework for C - Version 2.1-3 00:03:12.346 http://cunit.sourceforge.net/ 00:03:12.346 00:03:12.346 00:03:12.346 Suite: memory 00:03:12.346 Test: alloc and free memory map ...[2024-10-17 17:27:50.573562] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:12.346 passed 00:03:12.346 Test: mem map translation ...[2024-10-17 17:27:50.591884] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:12.346 [2024-10-17 17:27:50.591910] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:12.346 [2024-10-17 17:27:50.591944] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:12.346 [2024-10-17 17:27:50.591952] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:12.346 passed 00:03:12.346 Test: mem map registration ...[2024-10-17 17:27:50.627824] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:12.346 [2024-10-17 17:27:50.627840] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:12.346 passed 00:03:12.346 Test: mem map adjacent registrations ...passed 00:03:12.346 00:03:12.346 Run Summary: Type Total Ran Passed Failed Inactive 00:03:12.346 suites 1 1 n/a 0 0 00:03:12.346 tests 4 4 4 0 0 00:03:12.346 asserts 152 152 152 0 n/a 00:03:12.346 00:03:12.346 Elapsed time = 0.129 seconds 00:03:12.346 00:03:12.346 real 0m0.137s 00:03:12.346 user 
0m0.125s 00:03:12.346 sys 0m0.012s 00:03:12.346 17:27:50 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:12.346 17:27:50 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:12.346 ************************************ 00:03:12.346 END TEST env_memory 00:03:12.346 ************************************ 00:03:12.346 17:27:50 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:12.346 17:27:50 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:12.346 17:27:50 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:12.346 17:27:50 env -- common/autotest_common.sh@10 -- # set +x 00:03:12.605 ************************************ 00:03:12.605 START TEST env_vtophys 00:03:12.605 ************************************ 00:03:12.605 17:27:50 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:12.605 EAL: lib.eal log level changed from notice to debug 00:03:12.605 EAL: Detected lcore 0 as core 0 on socket 0 00:03:12.605 EAL: Detected lcore 1 as core 1 on socket 0 00:03:12.605 EAL: Detected lcore 2 as core 2 on socket 0 00:03:12.605 EAL: Detected lcore 3 as core 3 on socket 0 00:03:12.605 EAL: Detected lcore 4 as core 4 on socket 0 00:03:12.605 EAL: Detected lcore 5 as core 8 on socket 0 00:03:12.605 EAL: Detected lcore 6 as core 9 on socket 0 00:03:12.605 EAL: Detected lcore 7 as core 10 on socket 0 00:03:12.605 EAL: Detected lcore 8 as core 11 on socket 0 00:03:12.605 EAL: Detected lcore 9 as core 16 on socket 0 00:03:12.605 EAL: Detected lcore 10 as core 17 on socket 0 00:03:12.605 EAL: Detected lcore 11 as core 18 on socket 0 00:03:12.605 EAL: Detected lcore 12 as core 19 on socket 0 00:03:12.605 EAL: Detected lcore 13 as core 20 on socket 0 00:03:12.605 EAL: Detected lcore 14 as core 24 on socket 0 00:03:12.605 EAL: Detected lcore 15 as core 25 on socket 0 00:03:12.605 EAL: Detected lcore 16 as core 26 on socket 0 00:03:12.605 EAL: Detected lcore 17 as core 27 on socket 0 00:03:12.605 EAL: Detected lcore 18 as core 0 on socket 1 00:03:12.605 EAL: Detected lcore 19 as core 1 on socket 1 00:03:12.605 EAL: Detected lcore 20 as core 2 on socket 1 00:03:12.605 EAL: Detected lcore 21 as core 3 on socket 1 00:03:12.605 EAL: Detected lcore 22 as core 4 on socket 1 00:03:12.605 EAL: Detected lcore 23 as core 8 on socket 1 00:03:12.605 EAL: Detected lcore 24 as core 9 on socket 1 00:03:12.605 EAL: Detected lcore 25 as core 10 on socket 1 00:03:12.605 EAL: Detected lcore 26 as core 11 on socket 1 00:03:12.605 EAL: Detected lcore 27 as core 16 on socket 1 00:03:12.605 EAL: Detected lcore 28 as core 17 on socket 1 00:03:12.605 EAL: Detected lcore 29 as core 18 on socket 1 00:03:12.605 EAL: Detected lcore 30 as core 19 on socket 1 00:03:12.605 EAL: Detected lcore 31 as core 20 on socket 1 00:03:12.605 EAL: Detected lcore 32 as core 24 on socket 1 00:03:12.605 EAL: Detected lcore 33 as core 25 on socket 1 00:03:12.605 EAL: Detected lcore 34 as core 26 on socket 1 00:03:12.605 EAL: Detected lcore 35 as core 27 on socket 1 00:03:12.605 EAL: Detected lcore 36 as core 0 on socket 0 00:03:12.605 EAL: Detected lcore 37 as core 1 on socket 0 00:03:12.605 EAL: Detected lcore 38 as core 2 on socket 0 00:03:12.605 EAL: Detected lcore 39 as core 3 on socket 0 00:03:12.605 EAL: Detected lcore 40 as core 4 on socket 0 00:03:12.605 EAL: Detected lcore 41 as core 8 on socket 0 00:03:12.605 EAL: Detected lcore 42 as core 9 
on socket 0 00:03:12.605 EAL: Detected lcore 43 as core 10 on socket 0 00:03:12.605 EAL: Detected lcore 44 as core 11 on socket 0 00:03:12.605 EAL: Detected lcore 45 as core 16 on socket 0 00:03:12.605 EAL: Detected lcore 46 as core 17 on socket 0 00:03:12.605 EAL: Detected lcore 47 as core 18 on socket 0 00:03:12.605 EAL: Detected lcore 48 as core 19 on socket 0 00:03:12.605 EAL: Detected lcore 49 as core 20 on socket 0 00:03:12.605 EAL: Detected lcore 50 as core 24 on socket 0 00:03:12.605 EAL: Detected lcore 51 as core 25 on socket 0 00:03:12.605 EAL: Detected lcore 52 as core 26 on socket 0 00:03:12.605 EAL: Detected lcore 53 as core 27 on socket 0 00:03:12.605 EAL: Detected lcore 54 as core 0 on socket 1 00:03:12.605 EAL: Detected lcore 55 as core 1 on socket 1 00:03:12.605 EAL: Detected lcore 56 as core 2 on socket 1 00:03:12.605 EAL: Detected lcore 57 as core 3 on socket 1 00:03:12.605 EAL: Detected lcore 58 as core 4 on socket 1 00:03:12.605 EAL: Detected lcore 59 as core 8 on socket 1 00:03:12.605 EAL: Detected lcore 60 as core 9 on socket 1 00:03:12.605 EAL: Detected lcore 61 as core 10 on socket 1 00:03:12.605 EAL: Detected lcore 62 as core 11 on socket 1 00:03:12.605 EAL: Detected lcore 63 as core 16 on socket 1 00:03:12.605 EAL: Detected lcore 64 as core 17 on socket 1 00:03:12.605 EAL: Detected lcore 65 as core 18 on socket 1 00:03:12.605 EAL: Detected lcore 66 as core 19 on socket 1 00:03:12.605 EAL: Detected lcore 67 as core 20 on socket 1 00:03:12.605 EAL: Detected lcore 68 as core 24 on socket 1 00:03:12.605 EAL: Detected lcore 69 as core 25 on socket 1 00:03:12.605 EAL: Detected lcore 70 as core 26 on socket 1 00:03:12.605 EAL: Detected lcore 71 as core 27 on socket 1 00:03:12.605 EAL: Maximum logical cores by configuration: 128 00:03:12.605 EAL: Detected CPU lcores: 72 00:03:12.605 EAL: Detected NUMA nodes: 2 00:03:12.605 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:12.605 EAL: Detected shared linkage of DPDK 00:03:12.605 EAL: No shared files mode enabled, IPC will be disabled 00:03:12.605 EAL: Bus pci wants IOVA as 'DC' 00:03:12.605 EAL: Buses did not request a specific IOVA mode. 00:03:12.605 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:12.605 EAL: Selected IOVA mode 'VA' 00:03:12.605 EAL: Probing VFIO support... 00:03:12.605 EAL: IOMMU type 1 (Type 1) is supported 00:03:12.605 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:12.606 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:12.606 EAL: VFIO support initialized 00:03:12.606 EAL: Ask a virtual area of 0x2e000 bytes 00:03:12.606 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:12.606 EAL: Setting up physically contiguous memory... 
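EAL lands on IOVA mode 'VA' here because the host exposes an IOMMU and VFIO initialized (only type 1 is supported on this kernel, as the probe lines show). The same preconditions can be sanity-checked from a shell ahead of a run; the sysfs paths below are the standard kernel locations, not values taken from this box:

# is an IOMMU exposed to the kernel? (needed for vfio-pci with IOVA=VA)
if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null; then
    echo "IOMMU groups present - VFIO type 1 / IOVA=VA should be usable"
else
    echo "no IOMMU groups - expect IOVA=PA or VFIO no-IOMMU mode"
fi
# can the vfio-pci module be resolved for the later driver rebinds?
modprobe -n -v vfio-pci && echo "vfio-pci available"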
00:03:12.606 EAL: Setting maximum number of open files to 524288 00:03:12.606 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:12.606 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:12.606 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:12.606 EAL: Ask a virtual area of 0x61000 bytes 00:03:12.606 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:12.606 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:12.606 EAL: Ask a virtual area of 0x400000000 bytes 00:03:12.606 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:12.606 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:12.606 EAL: Ask a virtual area of 0x61000 bytes 00:03:12.606 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:12.606 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:12.606 EAL: Ask a virtual area of 0x400000000 bytes 00:03:12.606 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:12.606 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:12.606 EAL: Ask a virtual area of 0x61000 bytes 00:03:12.606 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:12.606 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:12.606 EAL: Ask a virtual area of 0x400000000 bytes 00:03:12.606 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:12.606 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:12.606 EAL: Ask a virtual area of 0x61000 bytes 00:03:12.606 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:12.606 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:12.606 EAL: Ask a virtual area of 0x400000000 bytes 00:03:12.606 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:12.606 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:12.606 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:12.606 EAL: Ask a virtual area of 0x61000 bytes 00:03:12.606 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:12.606 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:12.606 EAL: Ask a virtual area of 0x400000000 bytes 00:03:12.606 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:12.606 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:12.606 EAL: Ask a virtual area of 0x61000 bytes 00:03:12.606 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:12.606 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:12.606 EAL: Ask a virtual area of 0x400000000 bytes 00:03:12.606 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:12.606 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:12.606 EAL: Ask a virtual area of 0x61000 bytes 00:03:12.606 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:12.606 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:12.606 EAL: Ask a virtual area of 0x400000000 bytes 00:03:12.606 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:12.606 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:12.606 EAL: Ask a virtual area of 0x61000 bytes 00:03:12.606 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:12.606 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:12.606 EAL: Ask a virtual area of 0x400000000 bytes 00:03:12.606 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:12.606 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:12.606 EAL: Hugepages will be freed exactly as allocated. 00:03:12.606 EAL: No shared files mode enabled, IPC is disabled 00:03:12.606 EAL: No shared files mode enabled, IPC is disabled 00:03:12.606 EAL: TSC frequency is ~2300000 KHz 00:03:12.606 EAL: Main lcore 0 is ready (tid=7fbea069ca00;cpuset=[0]) 00:03:12.606 EAL: Trying to obtain current memory policy. 00:03:12.606 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:12.606 EAL: Restoring previous memory policy: 0 00:03:12.606 EAL: request: mp_malloc_sync 00:03:12.606 EAL: No shared files mode enabled, IPC is disabled 00:03:12.606 EAL: Heap on socket 0 was expanded by 2MB 00:03:12.606 EAL: No shared files mode enabled, IPC is disabled 00:03:12.606 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:12.606 EAL: Mem event callback 'spdk:(nil)' registered 00:03:12.606 00:03:12.606 00:03:12.606 CUnit - A unit testing framework for C - Version 2.1-3 00:03:12.606 http://cunit.sourceforge.net/ 00:03:12.606 00:03:12.606 00:03:12.606 Suite: components_suite 00:03:12.606 Test: vtophys_malloc_test ...passed 00:03:12.606 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:12.606 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:12.606 EAL: Restoring previous memory policy: 4 00:03:12.606 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.606 EAL: request: mp_malloc_sync 00:03:12.606 EAL: No shared files mode enabled, IPC is disabled 00:03:12.606 EAL: Heap on socket 0 was expanded by 4MB 00:03:12.606 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.606 EAL: request: mp_malloc_sync 00:03:12.606 EAL: No shared files mode enabled, IPC is disabled 00:03:12.606 EAL: Heap on socket 0 was shrunk by 4MB 00:03:12.606 EAL: Trying to obtain current memory policy. 00:03:12.606 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:12.606 EAL: Restoring previous memory policy: 4 00:03:12.606 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.606 EAL: request: mp_malloc_sync 00:03:12.606 EAL: No shared files mode enabled, IPC is disabled 00:03:12.606 EAL: Heap on socket 0 was expanded by 6MB 00:03:12.606 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.606 EAL: request: mp_malloc_sync 00:03:12.606 EAL: No shared files mode enabled, IPC is disabled 00:03:12.606 EAL: Heap on socket 0 was shrunk by 6MB 00:03:12.606 EAL: Trying to obtain current memory policy. 00:03:12.606 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:12.606 EAL: Restoring previous memory policy: 4 00:03:12.606 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.606 EAL: request: mp_malloc_sync 00:03:12.606 EAL: No shared files mode enabled, IPC is disabled 00:03:12.606 EAL: Heap on socket 0 was expanded by 10MB 00:03:12.606 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.606 EAL: request: mp_malloc_sync 00:03:12.606 EAL: No shared files mode enabled, IPC is disabled 00:03:12.606 EAL: Heap on socket 0 was shrunk by 10MB 00:03:12.606 EAL: Trying to obtain current memory policy. 
00:03:12.606 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:12.606 EAL: Restoring previous memory policy: 4 00:03:12.606 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.606 EAL: request: mp_malloc_sync 00:03:12.606 EAL: No shared files mode enabled, IPC is disabled 00:03:12.606 EAL: Heap on socket 0 was expanded by 18MB 00:03:12.606 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.606 EAL: request: mp_malloc_sync 00:03:12.606 EAL: No shared files mode enabled, IPC is disabled 00:03:12.606 EAL: Heap on socket 0 was shrunk by 18MB 00:03:12.606 EAL: Trying to obtain current memory policy. 00:03:12.606 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:12.606 EAL: Restoring previous memory policy: 4 00:03:12.606 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.606 EAL: request: mp_malloc_sync 00:03:12.606 EAL: No shared files mode enabled, IPC is disabled 00:03:12.606 EAL: Heap on socket 0 was expanded by 34MB 00:03:12.606 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.606 EAL: request: mp_malloc_sync 00:03:12.606 EAL: No shared files mode enabled, IPC is disabled 00:03:12.606 EAL: Heap on socket 0 was shrunk by 34MB 00:03:12.606 EAL: Trying to obtain current memory policy. 00:03:12.606 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:12.606 EAL: Restoring previous memory policy: 4 00:03:12.606 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.606 EAL: request: mp_malloc_sync 00:03:12.606 EAL: No shared files mode enabled, IPC is disabled 00:03:12.606 EAL: Heap on socket 0 was expanded by 66MB 00:03:12.606 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.606 EAL: request: mp_malloc_sync 00:03:12.606 EAL: No shared files mode enabled, IPC is disabled 00:03:12.606 EAL: Heap on socket 0 was shrunk by 66MB 00:03:12.606 EAL: Trying to obtain current memory policy. 00:03:12.606 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:12.606 EAL: Restoring previous memory policy: 4 00:03:12.606 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.606 EAL: request: mp_malloc_sync 00:03:12.606 EAL: No shared files mode enabled, IPC is disabled 00:03:12.606 EAL: Heap on socket 0 was expanded by 130MB 00:03:12.606 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.606 EAL: request: mp_malloc_sync 00:03:12.606 EAL: No shared files mode enabled, IPC is disabled 00:03:12.606 EAL: Heap on socket 0 was shrunk by 130MB 00:03:12.606 EAL: Trying to obtain current memory policy. 00:03:12.606 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:12.864 EAL: Restoring previous memory policy: 4 00:03:12.864 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.864 EAL: request: mp_malloc_sync 00:03:12.864 EAL: No shared files mode enabled, IPC is disabled 00:03:12.864 EAL: Heap on socket 0 was expanded by 258MB 00:03:12.864 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.864 EAL: request: mp_malloc_sync 00:03:12.864 EAL: No shared files mode enabled, IPC is disabled 00:03:12.864 EAL: Heap on socket 0 was shrunk by 258MB 00:03:12.864 EAL: Trying to obtain current memory policy. 
00:03:12.864 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:12.865 EAL: Restoring previous memory policy: 4 00:03:12.865 EAL: Calling mem event callback 'spdk:(nil)' 00:03:12.865 EAL: request: mp_malloc_sync 00:03:12.865 EAL: No shared files mode enabled, IPC is disabled 00:03:12.865 EAL: Heap on socket 0 was expanded by 514MB 00:03:13.122 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.122 EAL: request: mp_malloc_sync 00:03:13.122 EAL: No shared files mode enabled, IPC is disabled 00:03:13.122 EAL: Heap on socket 0 was shrunk by 514MB 00:03:13.122 EAL: Trying to obtain current memory policy. 00:03:13.122 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:13.380 EAL: Restoring previous memory policy: 4 00:03:13.380 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.380 EAL: request: mp_malloc_sync 00:03:13.380 EAL: No shared files mode enabled, IPC is disabled 00:03:13.380 EAL: Heap on socket 0 was expanded by 1026MB 00:03:13.637 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.637 EAL: request: mp_malloc_sync 00:03:13.637 EAL: No shared files mode enabled, IPC is disabled 00:03:13.637 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:13.637 passed 00:03:13.637 00:03:13.637 Run Summary: Type Total Ran Passed Failed Inactive 00:03:13.637 suites 1 1 n/a 0 0 00:03:13.637 tests 2 2 2 0 0 00:03:13.637 asserts 497 497 497 0 n/a 00:03:13.637 00:03:13.637 Elapsed time = 1.135 seconds 00:03:13.637 EAL: Calling mem event callback 'spdk:(nil)' 00:03:13.637 EAL: request: mp_malloc_sync 00:03:13.637 EAL: No shared files mode enabled, IPC is disabled 00:03:13.637 EAL: Heap on socket 0 was shrunk by 2MB 00:03:13.637 EAL: No shared files mode enabled, IPC is disabled 00:03:13.637 EAL: No shared files mode enabled, IPC is disabled 00:03:13.637 EAL: No shared files mode enabled, IPC is disabled 00:03:13.637 00:03:13.637 real 0m1.265s 00:03:13.637 user 0m0.737s 00:03:13.637 sys 0m0.502s 00:03:13.637 17:27:52 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:13.637 17:27:52 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:13.637 ************************************ 00:03:13.637 END TEST env_vtophys 00:03:13.637 ************************************ 00:03:13.895 17:27:52 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:03:13.895 17:27:52 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:13.895 17:27:52 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:13.895 17:27:52 env -- common/autotest_common.sh@10 -- # set +x 00:03:13.895 ************************************ 00:03:13.895 START TEST env_pci 00:03:13.895 ************************************ 00:03:13.895 17:27:52 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:03:13.895 00:03:13.895 00:03:13.895 CUnit - A unit testing framework for C - Version 2.1-3 00:03:13.895 http://cunit.sourceforge.net/ 00:03:13.895 00:03:13.895 00:03:13.896 Suite: pci 00:03:13.896 Test: pci_hook ...[2024-10-17 17:27:52.110097] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 474886 has claimed it 00:03:13.896 EAL: Cannot find device (10000:00:01.0) 00:03:13.896 EAL: Failed to attach device on primary process 00:03:13.896 passed 00:03:13.896 00:03:13.896 Run Summary: Type Total Ran Passed Failed Inactive 00:03:13.896 suites 1 1 
n/a 0 0 00:03:13.896 tests 1 1 1 0 0 00:03:13.896 asserts 25 25 25 0 n/a 00:03:13.896 00:03:13.896 Elapsed time = 0.033 seconds 00:03:13.896 00:03:13.896 real 0m0.055s 00:03:13.896 user 0m0.020s 00:03:13.896 sys 0m0.035s 00:03:13.896 17:27:52 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:13.896 17:27:52 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:13.896 ************************************ 00:03:13.896 END TEST env_pci 00:03:13.896 ************************************ 00:03:13.896 17:27:52 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:13.896 17:27:52 env -- env/env.sh@15 -- # uname 00:03:13.896 17:27:52 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:13.896 17:27:52 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:13.896 17:27:52 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:13.896 17:27:52 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:03:13.896 17:27:52 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:13.896 17:27:52 env -- common/autotest_common.sh@10 -- # set +x 00:03:13.896 ************************************ 00:03:13.896 START TEST env_dpdk_post_init 00:03:13.896 ************************************ 00:03:13.896 17:27:52 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:13.896 EAL: Detected CPU lcores: 72 00:03:13.896 EAL: Detected NUMA nodes: 2 00:03:13.896 EAL: Detected shared linkage of DPDK 00:03:13.896 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:14.153 EAL: Selected IOVA mode 'VA' 00:03:14.153 EAL: VFIO support initialized 00:03:14.153 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:14.153 EAL: Using IOMMU type 1 (Type 1) 00:03:14.153 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:03:14.153 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:03:14.153 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:03:14.153 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:03:14.153 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:03:14.153 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:03:14.153 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:03:14.153 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:03:14.412 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:5e:00.0 (socket 0) 00:03:14.412 EAL: Ignore mapping IO port bar(1) 00:03:14.412 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:03:14.412 EAL: Ignore mapping IO port bar(1) 00:03:14.412 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:03:14.412 EAL: Ignore mapping IO port bar(1) 00:03:14.412 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:03:14.412 EAL: Ignore mapping IO port bar(1) 00:03:14.412 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:03:14.412 EAL: Ignore mapping IO port bar(1) 00:03:14.412 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:03:14.412 EAL: Ignore mapping IO port bar(1) 00:03:14.412 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:03:14.412 EAL: Ignore mapping IO port bar(1) 00:03:14.412 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:03:14.412 EAL: Ignore mapping IO port bar(1) 00:03:14.412 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:03:14.670 EAL: Probe PCI driver: spdk_nvme (8086:2701) device: 0000:af:00.0 (socket 1) 00:03:14.928 EAL: Probe PCI driver: spdk_nvme (8086:2701) device: 0000:b0:00.0 (socket 1) 00:03:14.928 EAL: Releasing PCI mapped resource for 0000:b0:00.0 00:03:14.928 EAL: Calling pci_unmap_resource for 0000:b0:00.0 at 0x202001048000 00:03:15.187 EAL: Releasing PCI mapped resource for 0000:af:00.0 00:03:15.187 EAL: Calling pci_unmap_resource for 0000:af:00.0 at 0x202001044000 00:03:15.187 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:03:15.187 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:03:15.187 Starting DPDK initialization... 00:03:15.187 Starting SPDK post initialization... 00:03:15.187 SPDK NVMe probe 00:03:15.187 Attaching to 0000:5e:00.0 00:03:15.187 Attaching to 0000:af:00.0 00:03:15.187 Attaching to 0000:b0:00.0 00:03:15.187 Attached to 0000:af:00.0 00:03:15.187 Attached to 0000:b0:00.0 00:03:15.187 Attached to 0000:5e:00.0 00:03:15.187 Cleaning up... 00:03:15.187 00:03:15.187 real 0m1.343s 00:03:15.187 user 0m0.158s 00:03:15.187 sys 0m0.315s 00:03:15.187 17:27:53 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:15.187 17:27:53 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:15.187 ************************************ 00:03:15.187 END TEST env_dpdk_post_init 00:03:15.187 ************************************ 00:03:15.445 17:27:53 env -- env/env.sh@26 -- # uname 00:03:15.445 17:27:53 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:15.445 17:27:53 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:15.445 17:27:53 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:15.445 17:27:53 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:15.445 17:27:53 env -- common/autotest_common.sh@10 -- # set +x 00:03:15.445 ************************************ 00:03:15.445 START TEST env_mem_callbacks 00:03:15.445 ************************************ 00:03:15.445 17:27:53 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:15.445 EAL: Detected CPU lcores: 72 00:03:15.445 EAL: Detected NUMA nodes: 2 00:03:15.445 EAL: Detected shared linkage of DPDK 00:03:15.445 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:15.445 EAL: Selected IOVA mode 'VA' 00:03:15.445 EAL: VFIO support initialized 00:03:15.445 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:15.445 00:03:15.445 00:03:15.445 CUnit - A unit testing framework for C - Version 2.1-3 00:03:15.445 http://cunit.sourceforge.net/ 00:03:15.445 00:03:15.445 00:03:15.445 Suite: memory 00:03:15.445 Test: test ... 
00:03:15.445 register 0x200000200000 2097152
00:03:15.445 malloc 3145728
00:03:15.445 register 0x200000400000 4194304
00:03:15.445 buf 0x200000500000 len 3145728 PASSED
00:03:15.445 malloc 64
00:03:15.445 buf 0x2000004fff40 len 64 PASSED
00:03:15.445 malloc 4194304
00:03:15.445 register 0x200000800000 6291456
00:03:15.445 buf 0x200000a00000 len 4194304 PASSED
00:03:15.445 free 0x200000500000 3145728
00:03:15.446 free 0x2000004fff40 64
00:03:15.446 unregister 0x200000400000 4194304 PASSED
00:03:15.446 free 0x200000a00000 4194304
00:03:15.446 unregister 0x200000800000 6291456 PASSED
00:03:15.446 malloc 8388608
00:03:15.446 register 0x200000400000 10485760
00:03:15.446 buf 0x200000600000 len 8388608 PASSED
00:03:15.446 free 0x200000600000 8388608
00:03:15.446 unregister 0x200000400000 10485760 PASSED
00:03:15.446 passed
00:03:15.446
00:03:15.446 Run Summary: Type Total Ran Passed Failed Inactive
00:03:15.446 suites 1 1 n/a 0 0
00:03:15.446 tests 1 1 1 0 0
00:03:15.446 asserts 15 15 15 0 n/a
00:03:15.446
00:03:15.446 Elapsed time = 0.005 seconds
00:03:15.446
00:03:15.446 real 0m0.069s
00:03:15.446 user 0m0.022s
00:03:15.446 sys 0m0.047s
00:03:15.446 17:27:53 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:15.446 17:27:53 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:03:15.446 ************************************
00:03:15.446 END TEST env_mem_callbacks
00:03:15.446 ************************************
00:03:15.446
00:03:15.446 real 0m3.468s
00:03:15.446 user 0m1.311s
00:03:15.446 sys 0m1.306s
00:03:15.446 17:27:53 env -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:15.446 17:27:53 env -- common/autotest_common.sh@10 -- # set +x
00:03:15.446 ************************************
00:03:15.446 END TEST env
00:03:15.446 ************************************
00:03:15.446 17:27:53 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh
00:03:15.446 17:27:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:15.446 17:27:53 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:15.446 17:27:53 -- common/autotest_common.sh@10 -- # set +x
00:03:15.705 ************************************
00:03:15.705 START TEST rpc
00:03:15.705 ************************************
00:03:15.705 17:27:53 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh
00:03:15.705 * Looking for test storage...
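Aside: every suite in this log is driven by the run_test wrapper from test/common/autotest_common.sh, which emits the START/END banners and the real/user/sys timing lines seen above. A minimal behavioral sketch, reconstructed only from this output and not from the actual helper:

run_test() {
    # usage: run_test <name> <command> [args...]  -- sketch, not the real implementation
    local test_name=$1; shift
    echo '************************************'
    echo "START TEST $test_name"
    echo '************************************'
    time "$@"    # xtrace is enabled around the command, hence the '-- #' echoes in this log
    echo '************************************'
    echo "END TEST $test_name"
    echo '************************************'
}

The dotted prefixes (env, env.env_mem_callbacks, rpc.rpc_integrity, ...) come from run_test calls nested inside the scripts it launches.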
00:03:15.705 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:03:15.705 17:27:53 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:15.705 17:27:53 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:15.705 17:27:53 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:15.705 17:27:54 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:15.705 17:27:54 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:15.705 17:27:54 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:15.705 17:27:54 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:15.705 17:27:54 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:15.705 17:27:54 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:15.705 17:27:54 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:15.705 17:27:54 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:15.705 17:27:54 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:15.705 17:27:54 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:15.705 17:27:54 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:15.705 17:27:54 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:15.705 17:27:54 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:15.705 17:27:54 rpc -- scripts/common.sh@345 -- # : 1 00:03:15.705 17:27:54 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:15.705 17:27:54 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:15.705 17:27:54 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:15.705 17:27:54 rpc -- scripts/common.sh@353 -- # local d=1 00:03:15.705 17:27:54 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:15.705 17:27:54 rpc -- scripts/common.sh@355 -- # echo 1 00:03:15.705 17:27:54 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:15.705 17:27:54 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:15.705 17:27:54 rpc -- scripts/common.sh@353 -- # local d=2 00:03:15.705 17:27:54 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:15.705 17:27:54 rpc -- scripts/common.sh@355 -- # echo 2 00:03:15.705 17:27:54 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:15.705 17:27:54 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:15.705 17:27:54 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:15.705 17:27:54 rpc -- scripts/common.sh@368 -- # return 0 00:03:15.705 17:27:54 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:15.705 17:27:54 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:15.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.705 --rc genhtml_branch_coverage=1 00:03:15.705 --rc genhtml_function_coverage=1 00:03:15.705 --rc genhtml_legend=1 00:03:15.705 --rc geninfo_all_blocks=1 00:03:15.705 --rc geninfo_unexecuted_blocks=1 00:03:15.705 00:03:15.705 ' 00:03:15.705 17:27:54 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:15.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.705 --rc genhtml_branch_coverage=1 00:03:15.705 --rc genhtml_function_coverage=1 00:03:15.705 --rc genhtml_legend=1 00:03:15.705 --rc geninfo_all_blocks=1 00:03:15.705 --rc geninfo_unexecuted_blocks=1 00:03:15.705 00:03:15.705 ' 00:03:15.705 17:27:54 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:15.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.705 --rc genhtml_branch_coverage=1 00:03:15.705 --rc genhtml_function_coverage=1 00:03:15.705 
--rc genhtml_legend=1 00:03:15.705 --rc geninfo_all_blocks=1 00:03:15.705 --rc geninfo_unexecuted_blocks=1 00:03:15.705 00:03:15.705 ' 00:03:15.705 17:27:54 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:15.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:15.705 --rc genhtml_branch_coverage=1 00:03:15.705 --rc genhtml_function_coverage=1 00:03:15.705 --rc genhtml_legend=1 00:03:15.705 --rc geninfo_all_blocks=1 00:03:15.705 --rc geninfo_unexecuted_blocks=1 00:03:15.705 00:03:15.705 ' 00:03:15.705 17:27:54 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:15.705 17:27:54 rpc -- rpc/rpc.sh@65 -- # spdk_pid=475349 00:03:15.705 17:27:54 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:15.705 17:27:54 rpc -- rpc/rpc.sh@67 -- # waitforlisten 475349 00:03:15.705 17:27:54 rpc -- common/autotest_common.sh@831 -- # '[' -z 475349 ']' 00:03:15.705 17:27:54 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:15.705 17:27:54 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:15.705 17:27:54 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:15.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:15.705 17:27:54 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:15.705 17:27:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:15.705 [2024-10-17 17:27:54.091112] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:03:15.705 [2024-10-17 17:27:54.091169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid475349 ] 00:03:15.964 [2024-10-17 17:27:54.163869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:15.964 [2024-10-17 17:27:54.204774] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:15.964 [2024-10-17 17:27:54.204820] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 475349' to capture a snapshot of events at runtime. 00:03:15.964 [2024-10-17 17:27:54.204830] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:15.964 [2024-10-17 17:27:54.204838] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:15.964 [2024-10-17 17:27:54.204845] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid475349 for offline analysis/debug. 
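The three app_setup_trace notices above spell out the trace workflow for this target. A minimal sketch of capturing a snapshot, assuming the build-tree layout this job uses (the pid 475349 and the spdk_trace invocation are quoted from the log itself; the copy step follows the "offline analysis" hint):

# the target was started with the 'bdev' tracepoint group enabled: spdk_tgt -e bdev
spdk_trace -s spdk_tgt -p 475349              # snapshot of events at runtime, per the notice above
cp /dev/shm/spdk_tgt_trace.pid475349 /tmp/    # or keep the shm file for offline analysis/debug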
00:03:15.964 [2024-10-17 17:27:54.205287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:16.223 17:27:54 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:16.223 17:27:54 rpc -- common/autotest_common.sh@864 -- # return 0 00:03:16.223 17:27:54 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:03:16.223 17:27:54 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:03:16.223 17:27:54 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:16.223 17:27:54 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:16.223 17:27:54 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:16.223 17:27:54 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:16.223 17:27:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:16.223 ************************************ 00:03:16.223 START TEST rpc_integrity 00:03:16.223 ************************************ 00:03:16.223 17:27:54 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:03:16.223 17:27:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:16.223 17:27:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:16.223 17:27:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:16.223 17:27:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:16.223 17:27:54 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:16.223 17:27:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:16.223 17:27:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:16.223 17:27:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:16.223 17:27:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:16.223 17:27:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:16.223 17:27:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:16.223 17:27:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:16.223 17:27:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:16.223 17:27:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:16.223 17:27:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:16.223 17:27:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:16.223 17:27:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:16.223 { 00:03:16.223 "name": "Malloc0", 00:03:16.223 "aliases": [ 00:03:16.223 "945c758b-fdc7-45fe-b738-ad8d5a6c7e0f" 00:03:16.223 ], 00:03:16.223 "product_name": "Malloc disk", 00:03:16.223 "block_size": 512, 00:03:16.223 "num_blocks": 16384, 00:03:16.223 "uuid": "945c758b-fdc7-45fe-b738-ad8d5a6c7e0f", 00:03:16.223 "assigned_rate_limits": { 00:03:16.223 "rw_ios_per_sec": 0, 00:03:16.223 "rw_mbytes_per_sec": 0, 00:03:16.223 "r_mbytes_per_sec": 0, 00:03:16.223 "w_mbytes_per_sec": 0 00:03:16.223 }, 00:03:16.223 "claimed": false, 
00:03:16.223 "zoned": false, 00:03:16.223 "supported_io_types": { 00:03:16.223 "read": true, 00:03:16.223 "write": true, 00:03:16.223 "unmap": true, 00:03:16.223 "flush": true, 00:03:16.223 "reset": true, 00:03:16.223 "nvme_admin": false, 00:03:16.223 "nvme_io": false, 00:03:16.223 "nvme_io_md": false, 00:03:16.223 "write_zeroes": true, 00:03:16.223 "zcopy": true, 00:03:16.223 "get_zone_info": false, 00:03:16.223 "zone_management": false, 00:03:16.223 "zone_append": false, 00:03:16.223 "compare": false, 00:03:16.223 "compare_and_write": false, 00:03:16.223 "abort": true, 00:03:16.223 "seek_hole": false, 00:03:16.223 "seek_data": false, 00:03:16.223 "copy": true, 00:03:16.223 "nvme_iov_md": false 00:03:16.223 }, 00:03:16.223 "memory_domains": [ 00:03:16.223 { 00:03:16.223 "dma_device_id": "system", 00:03:16.223 "dma_device_type": 1 00:03:16.223 }, 00:03:16.223 { 00:03:16.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:16.223 "dma_device_type": 2 00:03:16.223 } 00:03:16.223 ], 00:03:16.223 "driver_specific": {} 00:03:16.223 } 00:03:16.223 ]' 00:03:16.223 17:27:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:16.223 17:27:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:16.223 17:27:54 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:16.223 17:27:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:16.223 17:27:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:16.223 [2024-10-17 17:27:54.608770] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:16.223 [2024-10-17 17:27:54.608807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:16.223 [2024-10-17 17:27:54.608821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x177e550 00:03:16.223 [2024-10-17 17:27:54.608829] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:16.223 [2024-10-17 17:27:54.609985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:16.223 [2024-10-17 17:27:54.610011] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:16.223 Passthru0 00:03:16.482 17:27:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:16.482 17:27:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:16.482 17:27:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:16.482 17:27:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:16.482 17:27:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:16.482 17:27:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:16.482 { 00:03:16.482 "name": "Malloc0", 00:03:16.482 "aliases": [ 00:03:16.482 "945c758b-fdc7-45fe-b738-ad8d5a6c7e0f" 00:03:16.482 ], 00:03:16.482 "product_name": "Malloc disk", 00:03:16.482 "block_size": 512, 00:03:16.482 "num_blocks": 16384, 00:03:16.482 "uuid": "945c758b-fdc7-45fe-b738-ad8d5a6c7e0f", 00:03:16.482 "assigned_rate_limits": { 00:03:16.482 "rw_ios_per_sec": 0, 00:03:16.482 "rw_mbytes_per_sec": 0, 00:03:16.482 "r_mbytes_per_sec": 0, 00:03:16.482 "w_mbytes_per_sec": 0 00:03:16.482 }, 00:03:16.482 "claimed": true, 00:03:16.482 "claim_type": "exclusive_write", 00:03:16.482 "zoned": false, 00:03:16.482 "supported_io_types": { 00:03:16.482 "read": true, 00:03:16.482 "write": true, 00:03:16.482 "unmap": true, 00:03:16.482 "flush": true, 00:03:16.482 "reset": true, 
00:03:16.482 "nvme_admin": false, 00:03:16.482 "nvme_io": false, 00:03:16.482 "nvme_io_md": false, 00:03:16.482 "write_zeroes": true, 00:03:16.482 "zcopy": true, 00:03:16.482 "get_zone_info": false, 00:03:16.482 "zone_management": false, 00:03:16.482 "zone_append": false, 00:03:16.482 "compare": false, 00:03:16.482 "compare_and_write": false, 00:03:16.482 "abort": true, 00:03:16.482 "seek_hole": false, 00:03:16.482 "seek_data": false, 00:03:16.482 "copy": true, 00:03:16.482 "nvme_iov_md": false 00:03:16.482 }, 00:03:16.482 "memory_domains": [ 00:03:16.482 { 00:03:16.482 "dma_device_id": "system", 00:03:16.483 "dma_device_type": 1 00:03:16.483 }, 00:03:16.483 { 00:03:16.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:16.483 "dma_device_type": 2 00:03:16.483 } 00:03:16.483 ], 00:03:16.483 "driver_specific": {} 00:03:16.483 }, 00:03:16.483 { 00:03:16.483 "name": "Passthru0", 00:03:16.483 "aliases": [ 00:03:16.483 "53f806af-0a84-5d9c-9d6b-37c16b3bd905" 00:03:16.483 ], 00:03:16.483 "product_name": "passthru", 00:03:16.483 "block_size": 512, 00:03:16.483 "num_blocks": 16384, 00:03:16.483 "uuid": "53f806af-0a84-5d9c-9d6b-37c16b3bd905", 00:03:16.483 "assigned_rate_limits": { 00:03:16.483 "rw_ios_per_sec": 0, 00:03:16.483 "rw_mbytes_per_sec": 0, 00:03:16.483 "r_mbytes_per_sec": 0, 00:03:16.483 "w_mbytes_per_sec": 0 00:03:16.483 }, 00:03:16.483 "claimed": false, 00:03:16.483 "zoned": false, 00:03:16.483 "supported_io_types": { 00:03:16.483 "read": true, 00:03:16.483 "write": true, 00:03:16.483 "unmap": true, 00:03:16.483 "flush": true, 00:03:16.483 "reset": true, 00:03:16.483 "nvme_admin": false, 00:03:16.483 "nvme_io": false, 00:03:16.483 "nvme_io_md": false, 00:03:16.483 "write_zeroes": true, 00:03:16.483 "zcopy": true, 00:03:16.483 "get_zone_info": false, 00:03:16.483 "zone_management": false, 00:03:16.483 "zone_append": false, 00:03:16.483 "compare": false, 00:03:16.483 "compare_and_write": false, 00:03:16.483 "abort": true, 00:03:16.483 "seek_hole": false, 00:03:16.483 "seek_data": false, 00:03:16.483 "copy": true, 00:03:16.483 "nvme_iov_md": false 00:03:16.483 }, 00:03:16.483 "memory_domains": [ 00:03:16.483 { 00:03:16.483 "dma_device_id": "system", 00:03:16.483 "dma_device_type": 1 00:03:16.483 }, 00:03:16.483 { 00:03:16.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:16.483 "dma_device_type": 2 00:03:16.483 } 00:03:16.483 ], 00:03:16.483 "driver_specific": { 00:03:16.483 "passthru": { 00:03:16.483 "name": "Passthru0", 00:03:16.483 "base_bdev_name": "Malloc0" 00:03:16.483 } 00:03:16.483 } 00:03:16.483 } 00:03:16.483 ]' 00:03:16.483 17:27:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:16.483 17:27:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:16.483 17:27:54 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:16.483 17:27:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:16.483 17:27:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:16.483 17:27:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:16.483 17:27:54 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:16.483 17:27:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:16.483 17:27:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:16.483 17:27:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:16.483 17:27:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:16.483 
17:27:54 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:16.483 17:27:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:16.483 17:27:54 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:16.483 17:27:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:16.483 17:27:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:16.483 17:27:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:16.483 00:03:16.483 real 0m0.290s 00:03:16.483 user 0m0.174s 00:03:16.483 sys 0m0.051s 00:03:16.483 17:27:54 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:16.483 17:27:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:16.483 ************************************ 00:03:16.483 END TEST rpc_integrity 00:03:16.483 ************************************ 00:03:16.483 17:27:54 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:16.483 17:27:54 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:16.483 17:27:54 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:16.483 17:27:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:16.483 ************************************ 00:03:16.483 START TEST rpc_plugins 00:03:16.483 ************************************ 00:03:16.483 17:27:54 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:03:16.483 17:27:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:16.483 17:27:54 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:16.483 17:27:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:16.483 17:27:54 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:16.483 17:27:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:16.483 17:27:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:16.483 17:27:54 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:16.483 17:27:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:16.741 17:27:54 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:16.741 17:27:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:16.741 { 00:03:16.741 "name": "Malloc1", 00:03:16.741 "aliases": [ 00:03:16.741 "d45cf7a9-621b-46c5-a6e2-5e2828769e74" 00:03:16.741 ], 00:03:16.741 "product_name": "Malloc disk", 00:03:16.741 "block_size": 4096, 00:03:16.741 "num_blocks": 256, 00:03:16.741 "uuid": "d45cf7a9-621b-46c5-a6e2-5e2828769e74", 00:03:16.741 "assigned_rate_limits": { 00:03:16.741 "rw_ios_per_sec": 0, 00:03:16.741 "rw_mbytes_per_sec": 0, 00:03:16.741 "r_mbytes_per_sec": 0, 00:03:16.741 "w_mbytes_per_sec": 0 00:03:16.741 }, 00:03:16.741 "claimed": false, 00:03:16.741 "zoned": false, 00:03:16.741 "supported_io_types": { 00:03:16.741 "read": true, 00:03:16.741 "write": true, 00:03:16.741 "unmap": true, 00:03:16.741 "flush": true, 00:03:16.741 "reset": true, 00:03:16.741 "nvme_admin": false, 00:03:16.742 "nvme_io": false, 00:03:16.742 "nvme_io_md": false, 00:03:16.742 "write_zeroes": true, 00:03:16.742 "zcopy": true, 00:03:16.742 "get_zone_info": false, 00:03:16.742 "zone_management": false, 00:03:16.742 "zone_append": false, 00:03:16.742 "compare": false, 00:03:16.742 "compare_and_write": false, 00:03:16.742 "abort": true, 00:03:16.742 "seek_hole": false, 00:03:16.742 "seek_data": false, 00:03:16.742 "copy": true, 00:03:16.742 "nvme_iov_md": false 00:03:16.742 }, 00:03:16.742 
"memory_domains": [ 00:03:16.742 { 00:03:16.742 "dma_device_id": "system", 00:03:16.742 "dma_device_type": 1 00:03:16.742 }, 00:03:16.742 { 00:03:16.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:16.742 "dma_device_type": 2 00:03:16.742 } 00:03:16.742 ], 00:03:16.742 "driver_specific": {} 00:03:16.742 } 00:03:16.742 ]' 00:03:16.742 17:27:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:16.742 17:27:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:16.742 17:27:54 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:16.742 17:27:54 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:16.742 17:27:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:16.742 17:27:54 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:16.742 17:27:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:16.742 17:27:54 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:16.742 17:27:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:16.742 17:27:54 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:16.742 17:27:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:16.742 17:27:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:16.742 17:27:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:16.742 00:03:16.742 real 0m0.144s 00:03:16.742 user 0m0.093s 00:03:16.742 sys 0m0.019s 00:03:16.742 17:27:54 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:16.742 17:27:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:16.742 ************************************ 00:03:16.742 END TEST rpc_plugins 00:03:16.742 ************************************ 00:03:16.742 17:27:55 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:16.742 17:27:55 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:16.742 17:27:55 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:16.742 17:27:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:16.742 ************************************ 00:03:16.742 START TEST rpc_trace_cmd_test 00:03:16.742 ************************************ 00:03:16.742 17:27:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:03:16.742 17:27:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:16.742 17:27:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:16.742 17:27:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:16.742 17:27:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:16.742 17:27:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:16.742 17:27:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:16.742 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid475349", 00:03:16.742 "tpoint_group_mask": "0x8", 00:03:16.742 "iscsi_conn": { 00:03:16.742 "mask": "0x2", 00:03:16.742 "tpoint_mask": "0x0" 00:03:16.742 }, 00:03:16.742 "scsi": { 00:03:16.742 "mask": "0x4", 00:03:16.742 "tpoint_mask": "0x0" 00:03:16.742 }, 00:03:16.742 "bdev": { 00:03:16.742 "mask": "0x8", 00:03:16.742 "tpoint_mask": "0xffffffffffffffff" 00:03:16.742 }, 00:03:16.742 "nvmf_rdma": { 00:03:16.742 "mask": "0x10", 00:03:16.742 "tpoint_mask": "0x0" 00:03:16.742 }, 00:03:16.742 "nvmf_tcp": { 00:03:16.742 "mask": "0x20", 00:03:16.742 "tpoint_mask": "0x0" 00:03:16.742 }, 
00:03:16.742 "ftl": { 00:03:16.742 "mask": "0x40", 00:03:16.742 "tpoint_mask": "0x0" 00:03:16.742 }, 00:03:16.742 "blobfs": { 00:03:16.742 "mask": "0x80", 00:03:16.742 "tpoint_mask": "0x0" 00:03:16.742 }, 00:03:16.742 "dsa": { 00:03:16.742 "mask": "0x200", 00:03:16.742 "tpoint_mask": "0x0" 00:03:16.742 }, 00:03:16.742 "thread": { 00:03:16.742 "mask": "0x400", 00:03:16.742 "tpoint_mask": "0x0" 00:03:16.742 }, 00:03:16.742 "nvme_pcie": { 00:03:16.742 "mask": "0x800", 00:03:16.742 "tpoint_mask": "0x0" 00:03:16.742 }, 00:03:16.742 "iaa": { 00:03:16.742 "mask": "0x1000", 00:03:16.742 "tpoint_mask": "0x0" 00:03:16.742 }, 00:03:16.742 "nvme_tcp": { 00:03:16.742 "mask": "0x2000", 00:03:16.742 "tpoint_mask": "0x0" 00:03:16.742 }, 00:03:16.742 "bdev_nvme": { 00:03:16.742 "mask": "0x4000", 00:03:16.742 "tpoint_mask": "0x0" 00:03:16.742 }, 00:03:16.742 "sock": { 00:03:16.742 "mask": "0x8000", 00:03:16.742 "tpoint_mask": "0x0" 00:03:16.742 }, 00:03:16.742 "blob": { 00:03:16.742 "mask": "0x10000", 00:03:16.742 "tpoint_mask": "0x0" 00:03:16.742 }, 00:03:16.742 "bdev_raid": { 00:03:16.742 "mask": "0x20000", 00:03:16.742 "tpoint_mask": "0x0" 00:03:16.742 }, 00:03:16.742 "scheduler": { 00:03:16.742 "mask": "0x40000", 00:03:16.742 "tpoint_mask": "0x0" 00:03:16.742 } 00:03:16.742 }' 00:03:16.742 17:27:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:16.742 17:27:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:17.000 17:27:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:17.000 17:27:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:17.000 17:27:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:17.000 17:27:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:17.000 17:27:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:17.000 17:27:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:17.000 17:27:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:17.000 17:27:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:17.000 00:03:17.000 real 0m0.242s 00:03:17.000 user 0m0.199s 00:03:17.000 sys 0m0.034s 00:03:17.000 17:27:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:17.000 17:27:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:17.000 ************************************ 00:03:17.000 END TEST rpc_trace_cmd_test 00:03:17.000 ************************************ 00:03:17.000 17:27:55 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:17.000 17:27:55 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:17.000 17:27:55 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:17.000 17:27:55 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:17.000 17:27:55 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:17.000 17:27:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:17.000 ************************************ 00:03:17.000 START TEST rpc_daemon_integrity 00:03:17.000 ************************************ 00:03:17.000 17:27:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:03:17.000 17:27:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:17.000 17:27:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:17.000 17:27:55 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:17.000 17:27:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:17.000 17:27:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:17.259 17:27:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:17.259 17:27:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:17.259 17:27:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:17.259 17:27:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:17.259 17:27:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:17.259 17:27:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:17.259 17:27:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:17.259 17:27:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:17.259 17:27:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:17.259 17:27:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:17.259 17:27:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:17.259 17:27:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:17.259 { 00:03:17.259 "name": "Malloc2", 00:03:17.259 "aliases": [ 00:03:17.259 "fa030679-8f77-4cc1-bffe-5f0a6990dc19" 00:03:17.259 ], 00:03:17.259 "product_name": "Malloc disk", 00:03:17.259 "block_size": 512, 00:03:17.259 "num_blocks": 16384, 00:03:17.259 "uuid": "fa030679-8f77-4cc1-bffe-5f0a6990dc19", 00:03:17.259 "assigned_rate_limits": { 00:03:17.259 "rw_ios_per_sec": 0, 00:03:17.259 "rw_mbytes_per_sec": 0, 00:03:17.259 "r_mbytes_per_sec": 0, 00:03:17.259 "w_mbytes_per_sec": 0 00:03:17.259 }, 00:03:17.259 "claimed": false, 00:03:17.259 "zoned": false, 00:03:17.259 "supported_io_types": { 00:03:17.259 "read": true, 00:03:17.259 "write": true, 00:03:17.259 "unmap": true, 00:03:17.259 "flush": true, 00:03:17.259 "reset": true, 00:03:17.259 "nvme_admin": false, 00:03:17.259 "nvme_io": false, 00:03:17.259 "nvme_io_md": false, 00:03:17.259 "write_zeroes": true, 00:03:17.259 "zcopy": true, 00:03:17.259 "get_zone_info": false, 00:03:17.259 "zone_management": false, 00:03:17.259 "zone_append": false, 00:03:17.259 "compare": false, 00:03:17.259 "compare_and_write": false, 00:03:17.259 "abort": true, 00:03:17.259 "seek_hole": false, 00:03:17.259 "seek_data": false, 00:03:17.259 "copy": true, 00:03:17.259 "nvme_iov_md": false 00:03:17.259 }, 00:03:17.259 "memory_domains": [ 00:03:17.259 { 00:03:17.259 "dma_device_id": "system", 00:03:17.259 "dma_device_type": 1 00:03:17.259 }, 00:03:17.259 { 00:03:17.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:17.259 "dma_device_type": 2 00:03:17.259 } 00:03:17.259 ], 00:03:17.259 "driver_specific": {} 00:03:17.259 } 00:03:17.259 ]' 00:03:17.259 17:27:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:17.259 17:27:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:17.259 17:27:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:17.259 17:27:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:17.259 17:27:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:17.259 [2024-10-17 17:27:55.511186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:17.259 [2024-10-17 17:27:55.511221] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:17.259 [2024-10-17 17:27:55.511237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x177eee0 00:03:17.259 [2024-10-17 17:27:55.511246] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:17.259 [2024-10-17 17:27:55.512375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:17.259 [2024-10-17 17:27:55.512401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:17.259 Passthru0 00:03:17.259 17:27:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:17.259 17:27:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:17.259 17:27:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:17.259 17:27:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:17.259 17:27:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:17.259 17:27:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:17.259 { 00:03:17.259 "name": "Malloc2", 00:03:17.259 "aliases": [ 00:03:17.259 "fa030679-8f77-4cc1-bffe-5f0a6990dc19" 00:03:17.259 ], 00:03:17.259 "product_name": "Malloc disk", 00:03:17.259 "block_size": 512, 00:03:17.259 "num_blocks": 16384, 00:03:17.259 "uuid": "fa030679-8f77-4cc1-bffe-5f0a6990dc19", 00:03:17.259 "assigned_rate_limits": { 00:03:17.259 "rw_ios_per_sec": 0, 00:03:17.259 "rw_mbytes_per_sec": 0, 00:03:17.259 "r_mbytes_per_sec": 0, 00:03:17.259 "w_mbytes_per_sec": 0 00:03:17.259 }, 00:03:17.259 "claimed": true, 00:03:17.259 "claim_type": "exclusive_write", 00:03:17.259 "zoned": false, 00:03:17.259 "supported_io_types": { 00:03:17.259 "read": true, 00:03:17.259 "write": true, 00:03:17.259 "unmap": true, 00:03:17.259 "flush": true, 00:03:17.259 "reset": true, 00:03:17.259 "nvme_admin": false, 00:03:17.259 "nvme_io": false, 00:03:17.259 "nvme_io_md": false, 00:03:17.259 "write_zeroes": true, 00:03:17.259 "zcopy": true, 00:03:17.259 "get_zone_info": false, 00:03:17.259 "zone_management": false, 00:03:17.259 "zone_append": false, 00:03:17.259 "compare": false, 00:03:17.259 "compare_and_write": false, 00:03:17.259 "abort": true, 00:03:17.259 "seek_hole": false, 00:03:17.259 "seek_data": false, 00:03:17.259 "copy": true, 00:03:17.259 "nvme_iov_md": false 00:03:17.259 }, 00:03:17.259 "memory_domains": [ 00:03:17.259 { 00:03:17.259 "dma_device_id": "system", 00:03:17.259 "dma_device_type": 1 00:03:17.259 }, 00:03:17.259 { 00:03:17.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:17.259 "dma_device_type": 2 00:03:17.259 } 00:03:17.259 ], 00:03:17.259 "driver_specific": {} 00:03:17.259 }, 00:03:17.259 { 00:03:17.259 "name": "Passthru0", 00:03:17.259 "aliases": [ 00:03:17.259 "5eda3a99-e832-5f59-a66c-c54d1f976e4b" 00:03:17.259 ], 00:03:17.259 "product_name": "passthru", 00:03:17.259 "block_size": 512, 00:03:17.259 "num_blocks": 16384, 00:03:17.259 "uuid": "5eda3a99-e832-5f59-a66c-c54d1f976e4b", 00:03:17.259 "assigned_rate_limits": { 00:03:17.259 "rw_ios_per_sec": 0, 00:03:17.259 "rw_mbytes_per_sec": 0, 00:03:17.259 "r_mbytes_per_sec": 0, 00:03:17.259 "w_mbytes_per_sec": 0 00:03:17.259 }, 00:03:17.259 "claimed": false, 00:03:17.259 "zoned": false, 00:03:17.259 "supported_io_types": { 00:03:17.259 "read": true, 00:03:17.259 "write": true, 00:03:17.259 "unmap": true, 00:03:17.259 "flush": true, 00:03:17.259 "reset": true, 00:03:17.259 "nvme_admin": false, 
00:03:17.259 "nvme_io": false, 00:03:17.259 "nvme_io_md": false, 00:03:17.259 "write_zeroes": true, 00:03:17.259 "zcopy": true, 00:03:17.259 "get_zone_info": false, 00:03:17.259 "zone_management": false, 00:03:17.259 "zone_append": false, 00:03:17.259 "compare": false, 00:03:17.259 "compare_and_write": false, 00:03:17.259 "abort": true, 00:03:17.259 "seek_hole": false, 00:03:17.259 "seek_data": false, 00:03:17.259 "copy": true, 00:03:17.259 "nvme_iov_md": false 00:03:17.259 }, 00:03:17.259 "memory_domains": [ 00:03:17.259 { 00:03:17.259 "dma_device_id": "system", 00:03:17.259 "dma_device_type": 1 00:03:17.259 }, 00:03:17.259 { 00:03:17.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:17.259 "dma_device_type": 2 00:03:17.259 } 00:03:17.259 ], 00:03:17.259 "driver_specific": { 00:03:17.259 "passthru": { 00:03:17.259 "name": "Passthru0", 00:03:17.259 "base_bdev_name": "Malloc2" 00:03:17.259 } 00:03:17.259 } 00:03:17.259 } 00:03:17.259 ]' 00:03:17.259 17:27:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:17.259 17:27:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:17.259 17:27:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:17.259 17:27:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:17.259 17:27:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:17.259 17:27:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:17.259 17:27:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:17.259 17:27:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:17.259 17:27:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:17.260 17:27:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:17.260 17:27:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:17.260 17:27:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:17.260 17:27:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:17.260 17:27:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:17.260 17:27:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:17.260 17:27:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:17.518 17:27:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:17.518 00:03:17.518 real 0m0.283s 00:03:17.518 user 0m0.163s 00:03:17.518 sys 0m0.063s 00:03:17.518 17:27:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:17.518 17:27:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:17.518 ************************************ 00:03:17.518 END TEST rpc_daemon_integrity 00:03:17.518 ************************************ 00:03:17.518 17:27:55 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:17.518 17:27:55 rpc -- rpc/rpc.sh@84 -- # killprocess 475349 00:03:17.518 17:27:55 rpc -- common/autotest_common.sh@950 -- # '[' -z 475349 ']' 00:03:17.518 17:27:55 rpc -- common/autotest_common.sh@954 -- # kill -0 475349 00:03:17.518 17:27:55 rpc -- common/autotest_common.sh@955 -- # uname 00:03:17.518 17:27:55 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:17.518 17:27:55 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 475349 00:03:17.518 17:27:55 rpc -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:17.518 17:27:55 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:17.518 17:27:55 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 475349' 00:03:17.518 killing process with pid 475349 00:03:17.518 17:27:55 rpc -- common/autotest_common.sh@969 -- # kill 475349 00:03:17.518 17:27:55 rpc -- common/autotest_common.sh@974 -- # wait 475349 00:03:17.776 00:03:17.776 real 0m2.227s 00:03:17.776 user 0m2.760s 00:03:17.776 sys 0m0.833s 00:03:17.776 17:27:56 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:17.776 17:27:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:17.776 ************************************ 00:03:17.776 END TEST rpc 00:03:17.776 ************************************ 00:03:17.776 17:27:56 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:17.776 17:27:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:17.776 17:27:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:17.776 17:27:56 -- common/autotest_common.sh@10 -- # set +x 00:03:17.776 ************************************ 00:03:17.776 START TEST skip_rpc 00:03:17.776 ************************************ 00:03:17.776 17:27:56 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:18.034 * Looking for test storage... 00:03:18.034 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:03:18.034 17:27:56 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:18.034 17:27:56 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:18.034 17:27:56 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:18.034 17:27:56 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:18.034 17:27:56 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:18.034 17:27:56 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:18.034 17:27:56 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:18.034 17:27:56 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:18.034 17:27:56 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:18.034 17:27:56 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:18.034 17:27:56 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:18.034 17:27:56 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:18.034 17:27:56 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:18.034 17:27:56 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:18.034 17:27:56 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:18.034 17:27:56 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:18.034 17:27:56 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:18.034 17:27:56 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:18.034 17:27:56 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:18.034 17:27:56 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:18.034 17:27:56 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:18.034 17:27:56 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:18.034 17:27:56 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:18.034 17:27:56 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:18.034 17:27:56 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:18.034 17:27:56 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:18.034 17:27:56 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:18.034 17:27:56 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:18.034 17:27:56 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:18.034 17:27:56 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:18.034 17:27:56 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:18.034 17:27:56 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:18.034 17:27:56 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:18.034 17:27:56 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:18.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.034 --rc genhtml_branch_coverage=1 00:03:18.034 --rc genhtml_function_coverage=1 00:03:18.034 --rc genhtml_legend=1 00:03:18.034 --rc geninfo_all_blocks=1 00:03:18.034 --rc geninfo_unexecuted_blocks=1 00:03:18.034 00:03:18.034 ' 00:03:18.034 17:27:56 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:18.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.034 --rc genhtml_branch_coverage=1 00:03:18.034 --rc genhtml_function_coverage=1 00:03:18.034 --rc genhtml_legend=1 00:03:18.034 --rc geninfo_all_blocks=1 00:03:18.034 --rc geninfo_unexecuted_blocks=1 00:03:18.034 00:03:18.034 ' 00:03:18.034 17:27:56 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:18.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.034 --rc genhtml_branch_coverage=1 00:03:18.034 --rc genhtml_function_coverage=1 00:03:18.034 --rc genhtml_legend=1 00:03:18.034 --rc geninfo_all_blocks=1 00:03:18.034 --rc geninfo_unexecuted_blocks=1 00:03:18.034 00:03:18.034 ' 00:03:18.034 17:27:56 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:18.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:18.034 --rc genhtml_branch_coverage=1 00:03:18.034 --rc genhtml_function_coverage=1 00:03:18.034 --rc genhtml_legend=1 00:03:18.034 --rc geninfo_all_blocks=1 00:03:18.034 --rc geninfo_unexecuted_blocks=1 00:03:18.034 00:03:18.034 ' 00:03:18.034 17:27:56 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:03:18.034 17:27:56 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:03:18.034 17:27:56 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:18.034 17:27:56 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:18.034 17:27:56 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:18.034 17:27:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:18.034 ************************************ 00:03:18.034 START TEST skip_rpc 00:03:18.034 ************************************ 00:03:18.034 17:27:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:03:18.034 17:27:56 
skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=475876 00:03:18.034 17:27:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:18.034 17:27:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:18.034 17:27:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:18.293 [2024-10-17 17:27:56.445291] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:03:18.293 [2024-10-17 17:27:56.445334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid475876 ] 00:03:18.293 [2024-10-17 17:27:56.515533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:18.293 [2024-10-17 17:27:56.562830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:23.557 17:28:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:23.557 17:28:01 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:03:23.557 17:28:01 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:23.557 17:28:01 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:03:23.557 17:28:01 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:23.557 17:28:01 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:03:23.557 17:28:01 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:23.557 17:28:01 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:03:23.557 17:28:01 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:23.557 17:28:01 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:23.557 17:28:01 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:23.557 17:28:01 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:03:23.557 17:28:01 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:23.557 17:28:01 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:23.557 17:28:01 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:23.557 17:28:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:23.557 17:28:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 475876 00:03:23.557 17:28:01 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 475876 ']' 00:03:23.557 17:28:01 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 475876 00:03:23.557 17:28:01 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:03:23.557 17:28:01 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:23.557 17:28:01 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 475876 00:03:23.557 17:28:01 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:23.557 17:28:01 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:23.557 17:28:01 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 475876' 00:03:23.557 killing process with pid 475876 00:03:23.557 17:28:01 skip_rpc.skip_rpc -- 
common/autotest_common.sh@969 -- # kill 475876 00:03:23.557 17:28:01 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 475876 00:03:23.557 00:03:23.557 real 0m5.411s 00:03:23.557 user 0m5.144s 00:03:23.557 sys 0m0.311s 00:03:23.557 17:28:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:23.557 17:28:01 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:23.557 ************************************ 00:03:23.557 END TEST skip_rpc 00:03:23.557 ************************************ 00:03:23.557 17:28:01 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:23.557 17:28:01 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:23.557 17:28:01 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:23.557 17:28:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:23.557 ************************************ 00:03:23.557 START TEST skip_rpc_with_json 00:03:23.557 ************************************ 00:03:23.557 17:28:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:03:23.557 17:28:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:23.557 17:28:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=476646 00:03:23.557 17:28:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:23.557 17:28:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:23.557 17:28:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 476646 00:03:23.557 17:28:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 476646 ']' 00:03:23.557 17:28:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:23.557 17:28:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:23.557 17:28:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:23.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:23.557 17:28:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:23.557 17:28:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:23.816 [2024-10-17 17:28:01.956340] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
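skip_rpc_with_json now starts a second target (pid 476646), this time with the RPC server listening on /var/tmp/spdk.sock, and exercises the JSON-config path. A minimal sketch of the round trip it performs below, using only RPC method names that appear in this log (scripts/rpc.py is the standard SPDK client that the test's rpc_cmd helper drives; the redirect target mirrors the CONFIG_PATH set above):

scripts/rpc.py nvmf_get_transports --trtype tcp    # fails with "No such device" until a transport exists
scripts/rpc.py nvmf_create_transport -t tcp        # logs "*** TCP Transport Init ***"
scripts/rpc.py save_config > test/rpc/config.json  # dump the live subsystem config as JSON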
00:03:23.816 [2024-10-17 17:28:01.956394] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid476646 ] 00:03:23.816 [2024-10-17 17:28:02.029126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:23.816 [2024-10-17 17:28:02.073491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:24.074 17:28:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:24.074 17:28:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:03:24.074 17:28:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:24.074 17:28:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:24.074 17:28:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:24.074 [2024-10-17 17:28:02.295387] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:24.074 request: 00:03:24.074 { 00:03:24.074 "trtype": "tcp", 00:03:24.074 "method": "nvmf_get_transports", 00:03:24.074 "req_id": 1 00:03:24.074 } 00:03:24.074 Got JSON-RPC error response 00:03:24.074 response: 00:03:24.074 { 00:03:24.074 "code": -19, 00:03:24.074 "message": "No such device" 00:03:24.074 } 00:03:24.074 17:28:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:03:24.074 17:28:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:24.074 17:28:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:24.074 17:28:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:24.074 [2024-10-17 17:28:02.303484] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:24.074 17:28:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:24.074 17:28:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:24.074 17:28:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:24.074 17:28:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:24.074 17:28:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:24.074 17:28:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:03:24.074 { 00:03:24.074 "subsystems": [ 00:03:24.074 { 00:03:24.074 "subsystem": "fsdev", 00:03:24.074 "config": [ 00:03:24.074 { 00:03:24.074 "method": "fsdev_set_opts", 00:03:24.074 "params": { 00:03:24.074 "fsdev_io_pool_size": 65535, 00:03:24.074 "fsdev_io_cache_size": 256 00:03:24.074 } 00:03:24.074 } 00:03:24.074 ] 00:03:24.074 }, 00:03:24.074 { 00:03:24.074 "subsystem": "keyring", 00:03:24.074 "config": [] 00:03:24.074 }, 00:03:24.074 { 00:03:24.074 "subsystem": "iobuf", 00:03:24.074 "config": [ 00:03:24.074 { 00:03:24.074 "method": "iobuf_set_options", 00:03:24.074 "params": { 00:03:24.074 "small_pool_count": 8192, 00:03:24.074 "large_pool_count": 1024, 00:03:24.074 "small_bufsize": 8192, 00:03:24.074 "large_bufsize": 135168 00:03:24.074 } 00:03:24.074 } 00:03:24.074 ] 00:03:24.074 }, 00:03:24.074 { 00:03:24.074 "subsystem": "sock", 00:03:24.074 "config": [ 00:03:24.074 { 00:03:24.074 "method": 
"sock_set_default_impl", 00:03:24.074 "params": { 00:03:24.074 "impl_name": "posix" 00:03:24.074 } 00:03:24.074 }, 00:03:24.074 { 00:03:24.074 "method": "sock_impl_set_options", 00:03:24.074 "params": { 00:03:24.074 "impl_name": "ssl", 00:03:24.074 "recv_buf_size": 4096, 00:03:24.074 "send_buf_size": 4096, 00:03:24.074 "enable_recv_pipe": true, 00:03:24.074 "enable_quickack": false, 00:03:24.074 "enable_placement_id": 0, 00:03:24.074 "enable_zerocopy_send_server": true, 00:03:24.074 "enable_zerocopy_send_client": false, 00:03:24.074 "zerocopy_threshold": 0, 00:03:24.074 "tls_version": 0, 00:03:24.074 "enable_ktls": false 00:03:24.074 } 00:03:24.074 }, 00:03:24.074 { 00:03:24.074 "method": "sock_impl_set_options", 00:03:24.074 "params": { 00:03:24.074 "impl_name": "posix", 00:03:24.074 "recv_buf_size": 2097152, 00:03:24.075 "send_buf_size": 2097152, 00:03:24.075 "enable_recv_pipe": true, 00:03:24.075 "enable_quickack": false, 00:03:24.075 "enable_placement_id": 0, 00:03:24.075 "enable_zerocopy_send_server": true, 00:03:24.075 "enable_zerocopy_send_client": false, 00:03:24.075 "zerocopy_threshold": 0, 00:03:24.075 "tls_version": 0, 00:03:24.075 "enable_ktls": false 00:03:24.075 } 00:03:24.075 } 00:03:24.075 ] 00:03:24.075 }, 00:03:24.075 { 00:03:24.075 "subsystem": "vmd", 00:03:24.075 "config": [] 00:03:24.075 }, 00:03:24.075 { 00:03:24.075 "subsystem": "accel", 00:03:24.075 "config": [ 00:03:24.075 { 00:03:24.075 "method": "accel_set_options", 00:03:24.075 "params": { 00:03:24.075 "small_cache_size": 128, 00:03:24.075 "large_cache_size": 16, 00:03:24.075 "task_count": 2048, 00:03:24.075 "sequence_count": 2048, 00:03:24.075 "buf_count": 2048 00:03:24.075 } 00:03:24.075 } 00:03:24.075 ] 00:03:24.075 }, 00:03:24.075 { 00:03:24.075 "subsystem": "bdev", 00:03:24.075 "config": [ 00:03:24.075 { 00:03:24.075 "method": "bdev_set_options", 00:03:24.075 "params": { 00:03:24.075 "bdev_io_pool_size": 65535, 00:03:24.075 "bdev_io_cache_size": 256, 00:03:24.075 "bdev_auto_examine": true, 00:03:24.075 "iobuf_small_cache_size": 128, 00:03:24.075 "iobuf_large_cache_size": 16 00:03:24.075 } 00:03:24.075 }, 00:03:24.075 { 00:03:24.075 "method": "bdev_raid_set_options", 00:03:24.075 "params": { 00:03:24.075 "process_window_size_kb": 1024, 00:03:24.075 "process_max_bandwidth_mb_sec": 0 00:03:24.075 } 00:03:24.075 }, 00:03:24.075 { 00:03:24.075 "method": "bdev_iscsi_set_options", 00:03:24.075 "params": { 00:03:24.075 "timeout_sec": 30 00:03:24.075 } 00:03:24.075 }, 00:03:24.075 { 00:03:24.075 "method": "bdev_nvme_set_options", 00:03:24.075 "params": { 00:03:24.075 "action_on_timeout": "none", 00:03:24.075 "timeout_us": 0, 00:03:24.075 "timeout_admin_us": 0, 00:03:24.075 "keep_alive_timeout_ms": 10000, 00:03:24.075 "arbitration_burst": 0, 00:03:24.075 "low_priority_weight": 0, 00:03:24.075 "medium_priority_weight": 0, 00:03:24.075 "high_priority_weight": 0, 00:03:24.075 "nvme_adminq_poll_period_us": 10000, 00:03:24.075 "nvme_ioq_poll_period_us": 0, 00:03:24.075 "io_queue_requests": 0, 00:03:24.075 "delay_cmd_submit": true, 00:03:24.075 "transport_retry_count": 4, 00:03:24.075 "bdev_retry_count": 3, 00:03:24.075 "transport_ack_timeout": 0, 00:03:24.075 "ctrlr_loss_timeout_sec": 0, 00:03:24.075 "reconnect_delay_sec": 0, 00:03:24.075 "fast_io_fail_timeout_sec": 0, 00:03:24.075 "disable_auto_failback": false, 00:03:24.075 "generate_uuids": false, 00:03:24.075 "transport_tos": 0, 00:03:24.075 "nvme_error_stat": false, 00:03:24.075 "rdma_srq_size": 0, 00:03:24.075 "io_path_stat": false, 00:03:24.075 
"allow_accel_sequence": false, 00:03:24.075 "rdma_max_cq_size": 0, 00:03:24.075 "rdma_cm_event_timeout_ms": 0, 00:03:24.075 "dhchap_digests": [ 00:03:24.075 "sha256", 00:03:24.075 "sha384", 00:03:24.075 "sha512" 00:03:24.075 ], 00:03:24.075 "dhchap_dhgroups": [ 00:03:24.075 "null", 00:03:24.075 "ffdhe2048", 00:03:24.075 "ffdhe3072", 00:03:24.075 "ffdhe4096", 00:03:24.075 "ffdhe6144", 00:03:24.075 "ffdhe8192" 00:03:24.075 ] 00:03:24.075 } 00:03:24.075 }, 00:03:24.075 { 00:03:24.075 "method": "bdev_nvme_set_hotplug", 00:03:24.075 "params": { 00:03:24.075 "period_us": 100000, 00:03:24.075 "enable": false 00:03:24.075 } 00:03:24.075 }, 00:03:24.075 { 00:03:24.075 "method": "bdev_wait_for_examine" 00:03:24.075 } 00:03:24.075 ] 00:03:24.075 }, 00:03:24.075 { 00:03:24.075 "subsystem": "scsi", 00:03:24.075 "config": null 00:03:24.075 }, 00:03:24.075 { 00:03:24.075 "subsystem": "scheduler", 00:03:24.075 "config": [ 00:03:24.075 { 00:03:24.075 "method": "framework_set_scheduler", 00:03:24.075 "params": { 00:03:24.075 "name": "static" 00:03:24.075 } 00:03:24.075 } 00:03:24.075 ] 00:03:24.075 }, 00:03:24.075 { 00:03:24.075 "subsystem": "vhost_scsi", 00:03:24.075 "config": [] 00:03:24.075 }, 00:03:24.075 { 00:03:24.075 "subsystem": "vhost_blk", 00:03:24.075 "config": [] 00:03:24.075 }, 00:03:24.075 { 00:03:24.075 "subsystem": "ublk", 00:03:24.075 "config": [] 00:03:24.075 }, 00:03:24.075 { 00:03:24.075 "subsystem": "nbd", 00:03:24.075 "config": [] 00:03:24.075 }, 00:03:24.075 { 00:03:24.075 "subsystem": "nvmf", 00:03:24.075 "config": [ 00:03:24.075 { 00:03:24.075 "method": "nvmf_set_config", 00:03:24.075 "params": { 00:03:24.075 "discovery_filter": "match_any", 00:03:24.075 "admin_cmd_passthru": { 00:03:24.075 "identify_ctrlr": false 00:03:24.075 }, 00:03:24.075 "dhchap_digests": [ 00:03:24.075 "sha256", 00:03:24.075 "sha384", 00:03:24.075 "sha512" 00:03:24.075 ], 00:03:24.075 "dhchap_dhgroups": [ 00:03:24.075 "null", 00:03:24.075 "ffdhe2048", 00:03:24.075 "ffdhe3072", 00:03:24.075 "ffdhe4096", 00:03:24.075 "ffdhe6144", 00:03:24.075 "ffdhe8192" 00:03:24.075 ] 00:03:24.075 } 00:03:24.075 }, 00:03:24.075 { 00:03:24.075 "method": "nvmf_set_max_subsystems", 00:03:24.075 "params": { 00:03:24.075 "max_subsystems": 1024 00:03:24.075 } 00:03:24.075 }, 00:03:24.075 { 00:03:24.075 "method": "nvmf_set_crdt", 00:03:24.075 "params": { 00:03:24.075 "crdt1": 0, 00:03:24.075 "crdt2": 0, 00:03:24.075 "crdt3": 0 00:03:24.075 } 00:03:24.075 }, 00:03:24.075 { 00:03:24.075 "method": "nvmf_create_transport", 00:03:24.075 "params": { 00:03:24.075 "trtype": "TCP", 00:03:24.075 "max_queue_depth": 128, 00:03:24.075 "max_io_qpairs_per_ctrlr": 127, 00:03:24.075 "in_capsule_data_size": 4096, 00:03:24.075 "max_io_size": 131072, 00:03:24.075 "io_unit_size": 131072, 00:03:24.075 "max_aq_depth": 128, 00:03:24.075 "num_shared_buffers": 511, 00:03:24.075 "buf_cache_size": 4294967295, 00:03:24.075 "dif_insert_or_strip": false, 00:03:24.075 "zcopy": false, 00:03:24.075 "c2h_success": true, 00:03:24.075 "sock_priority": 0, 00:03:24.075 "abort_timeout_sec": 1, 00:03:24.075 "ack_timeout": 0, 00:03:24.075 "data_wr_pool_size": 0 00:03:24.075 } 00:03:24.075 } 00:03:24.075 ] 00:03:24.075 }, 00:03:24.075 { 00:03:24.075 "subsystem": "iscsi", 00:03:24.075 "config": [ 00:03:24.075 { 00:03:24.075 "method": "iscsi_set_options", 00:03:24.075 "params": { 00:03:24.075 "node_base": "iqn.2016-06.io.spdk", 00:03:24.075 "max_sessions": 128, 00:03:24.075 "max_connections_per_session": 2, 00:03:24.075 "max_queue_depth": 64, 00:03:24.075 "default_time2wait": 2, 
00:03:24.075 "default_time2retain": 20, 00:03:24.075 "first_burst_length": 8192, 00:03:24.075 "immediate_data": true, 00:03:24.075 "allow_duplicated_isid": false, 00:03:24.075 "error_recovery_level": 0, 00:03:24.075 "nop_timeout": 60, 00:03:24.075 "nop_in_interval": 30, 00:03:24.075 "disable_chap": false, 00:03:24.075 "require_chap": false, 00:03:24.075 "mutual_chap": false, 00:03:24.075 "chap_group": 0, 00:03:24.075 "max_large_datain_per_connection": 64, 00:03:24.075 "max_r2t_per_connection": 4, 00:03:24.075 "pdu_pool_size": 36864, 00:03:24.075 "immediate_data_pool_size": 16384, 00:03:24.075 "data_out_pool_size": 2048 00:03:24.075 } 00:03:24.075 } 00:03:24.075 ] 00:03:24.075 } 00:03:24.075 ] 00:03:24.075 } 00:03:24.075 17:28:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:24.075 17:28:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 476646 00:03:24.075 17:28:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 476646 ']' 00:03:24.075 17:28:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 476646 00:03:24.075 17:28:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:03:24.334 17:28:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:24.334 17:28:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 476646 00:03:24.334 17:28:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:24.334 17:28:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:24.334 17:28:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 476646' 00:03:24.334 killing process with pid 476646 00:03:24.334 17:28:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 476646 00:03:24.334 17:28:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 476646 00:03:24.592 17:28:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=476670 00:03:24.592 17:28:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:24.592 17:28:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:03:29.853 17:28:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 476670 00:03:29.853 17:28:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 476670 ']' 00:03:29.853 17:28:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 476670 00:03:29.853 17:28:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:03:29.853 17:28:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:29.853 17:28:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 476670 00:03:29.853 17:28:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:29.853 17:28:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:29.853 17:28:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 476670' 00:03:29.853 killing process with pid 476670 00:03:29.853 17:28:07 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@969 -- # kill 476670 00:03:29.853 17:28:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 476670 00:03:29.853 17:28:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:03:29.853 17:28:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:03:29.853 00:03:29.853 real 0m6.343s 00:03:29.853 user 0m5.971s 00:03:29.853 sys 0m0.669s 00:03:29.853 17:28:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:29.853 17:28:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:29.853 ************************************ 00:03:29.853 END TEST skip_rpc_with_json 00:03:29.853 ************************************ 00:03:30.111 17:28:08 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:30.111 17:28:08 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:30.111 17:28:08 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:30.111 17:28:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:30.111 ************************************ 00:03:30.111 START TEST skip_rpc_with_delay 00:03:30.111 ************************************ 00:03:30.112 17:28:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:03:30.112 17:28:08 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:30.112 17:28:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:03:30.112 17:28:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:30.112 17:28:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:30.112 17:28:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:30.112 17:28:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:30.112 17:28:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:30.112 17:28:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:30.112 17:28:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:30.112 17:28:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:30.112 17:28:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:30.112 17:28:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:30.112 [2024-10-17 17:28:08.363359] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
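Editor's note: the *ERROR* line above is the entire point of test_skip_rpc_with_delay: spdk_tgt must refuse the contradictory flag pair rather than hang waiting for RPCs it can never receive. A condensed sketch of that negative check, assuming a built target binary at the usual build path:

# Negative check: the target should exit non-zero when told to wait
# for RPCs while the RPC server is disabled.
spdk_tgt=./build/bin/spdk_tgt          # assumed build location
if "$spdk_tgt" --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "FAIL: contradictory flags were accepted" >&2
    exit 1
fi
echo "PASS: startup rejected as expected"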
00:03:30.112 17:28:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:03:30.112 17:28:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:30.112 17:28:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:03:30.112 17:28:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:30.112 00:03:30.112 real 0m0.053s 00:03:30.112 user 0m0.030s 00:03:30.112 sys 0m0.023s 00:03:30.112 17:28:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:30.112 17:28:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:30.112 ************************************ 00:03:30.112 END TEST skip_rpc_with_delay 00:03:30.112 ************************************ 00:03:30.112 17:28:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:30.112 17:28:08 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:30.112 17:28:08 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:30.112 17:28:08 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:30.112 17:28:08 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:30.112 17:28:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:30.112 ************************************ 00:03:30.112 START TEST exit_on_failed_rpc_init 00:03:30.112 ************************************ 00:03:30.112 17:28:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:03:30.112 17:28:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=477459 00:03:30.112 17:28:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 477459 00:03:30.112 17:28:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 477459 ']' 00:03:30.112 17:28:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:30.112 17:28:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:30.112 17:28:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:30.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:30.112 17:28:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:30.112 17:28:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:30.112 17:28:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:30.370 [2024-10-17 17:28:08.508868] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
00:03:30.370 [2024-10-17 17:28:08.508915] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid477459 ] 00:03:30.370 [2024-10-17 17:28:08.583210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:30.370 [2024-10-17 17:28:08.625990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:30.628 17:28:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:30.628 17:28:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:03:30.628 17:28:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:30.628 17:28:08 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:30.628 17:28:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:03:30.628 17:28:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:30.628 17:28:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:30.628 17:28:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:30.628 17:28:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:30.628 17:28:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:30.628 17:28:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:30.628 17:28:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:03:30.628 17:28:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:30.628 17:28:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:30.628 17:28:08 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:30.628 [2024-10-17 17:28:08.909380] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:03:30.628 [2024-10-17 17:28:08.909442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid477626 ] 00:03:30.628 [2024-10-17 17:28:08.979886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:30.886 [2024-10-17 17:28:09.025058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:30.886 [2024-10-17 17:28:09.025119] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:03:30.886 [2024-10-17 17:28:09.025130] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:30.886 [2024-10-17 17:28:09.025138] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:30.886 17:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:03:30.886 17:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:03:30.886 17:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:03:30.886 17:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:03:30.886 17:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:03:30.886 17:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:03:30.886 17:28:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:30.886 17:28:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 477459 00:03:30.886 17:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 477459 ']' 00:03:30.886 17:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 477459 00:03:30.886 17:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:03:30.886 17:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:30.886 17:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 477459 00:03:30.886 17:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:30.886 17:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:30.886 17:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 477459' 00:03:30.886 killing process with pid 477459 00:03:30.886 17:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 477459 00:03:30.886 17:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 477459 00:03:31.144 00:03:31.144 real 0m1.006s 00:03:31.144 user 0m1.034s 00:03:31.144 sys 0m0.432s 00:03:31.144 17:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:31.144 17:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:31.144 ************************************ 00:03:31.144 END TEST exit_on_failed_rpc_init 00:03:31.144 ************************************ 00:03:31.144 17:28:09 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:03:31.144 00:03:31.144 real 0m13.340s 00:03:31.144 user 0m12.389s 00:03:31.144 sys 0m1.790s 00:03:31.144 17:28:09 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:31.144 17:28:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:31.144 ************************************ 00:03:31.144 END TEST skip_rpc 00:03:31.144 ************************************ 00:03:31.402 17:28:09 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:31.402 17:28:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:31.402 17:28:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:31.402 17:28:09 -- 
common/autotest_common.sh@10 -- # set +x 00:03:31.402 ************************************ 00:03:31.402 START TEST rpc_client 00:03:31.402 ************************************ 00:03:31.402 17:28:09 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:31.402 * Looking for test storage... 00:03:31.402 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:03:31.402 17:28:09 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:31.402 17:28:09 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:03:31.403 17:28:09 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:31.403 17:28:09 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:31.403 17:28:09 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:31.403 17:28:09 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:31.403 17:28:09 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:31.403 17:28:09 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:03:31.403 17:28:09 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:03:31.403 17:28:09 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:03:31.403 17:28:09 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:03:31.403 17:28:09 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:03:31.403 17:28:09 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:03:31.403 17:28:09 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:03:31.403 17:28:09 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:31.403 17:28:09 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:03:31.403 17:28:09 rpc_client -- scripts/common.sh@345 -- # : 1 00:03:31.403 17:28:09 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:31.403 17:28:09 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:31.403 17:28:09 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:03:31.403 17:28:09 rpc_client -- scripts/common.sh@353 -- # local d=1 00:03:31.403 17:28:09 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:31.403 17:28:09 rpc_client -- scripts/common.sh@355 -- # echo 1 00:03:31.403 17:28:09 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:03:31.403 17:28:09 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:03:31.403 17:28:09 rpc_client -- scripts/common.sh@353 -- # local d=2 00:03:31.403 17:28:09 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:31.403 17:28:09 rpc_client -- scripts/common.sh@355 -- # echo 2 00:03:31.403 17:28:09 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:03:31.403 17:28:09 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:31.403 17:28:09 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:31.403 17:28:09 rpc_client -- scripts/common.sh@368 -- # return 0 00:03:31.403 17:28:09 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:31.403 17:28:09 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:31.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.403 --rc genhtml_branch_coverage=1 00:03:31.403 --rc genhtml_function_coverage=1 00:03:31.403 --rc genhtml_legend=1 00:03:31.403 --rc geninfo_all_blocks=1 00:03:31.403 --rc geninfo_unexecuted_blocks=1 00:03:31.403 00:03:31.403 ' 00:03:31.403 17:28:09 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:31.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.403 --rc genhtml_branch_coverage=1 00:03:31.403 --rc genhtml_function_coverage=1 00:03:31.403 --rc genhtml_legend=1 00:03:31.403 --rc geninfo_all_blocks=1 00:03:31.403 --rc geninfo_unexecuted_blocks=1 00:03:31.403 00:03:31.403 ' 00:03:31.403 17:28:09 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:31.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.403 --rc genhtml_branch_coverage=1 00:03:31.403 --rc genhtml_function_coverage=1 00:03:31.403 --rc genhtml_legend=1 00:03:31.403 --rc geninfo_all_blocks=1 00:03:31.403 --rc geninfo_unexecuted_blocks=1 00:03:31.403 00:03:31.403 ' 00:03:31.403 17:28:09 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:31.403 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.403 --rc genhtml_branch_coverage=1 00:03:31.403 --rc genhtml_function_coverage=1 00:03:31.403 --rc genhtml_legend=1 00:03:31.403 --rc geninfo_all_blocks=1 00:03:31.403 --rc geninfo_unexecuted_blocks=1 00:03:31.403 00:03:31.403 ' 00:03:31.403 17:28:09 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:31.403 OK 00:03:31.661 17:28:09 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:31.661 00:03:31.661 real 0m0.211s 00:03:31.661 user 0m0.104s 00:03:31.661 sys 0m0.125s 00:03:31.661 17:28:09 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:31.661 17:28:09 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:31.661 ************************************ 00:03:31.661 END TEST rpc_client 00:03:31.661 ************************************ 00:03:31.661 17:28:09 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:03:31.661 
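Editor's note: the cmp_versions trace above (repeated below for json_config) is scripts/common.sh splitting dotted version strings into arrays and comparing them field by field to decide which lcov options to export. A condensed sketch of the same field-wise comparison; the function name is illustrative, not the script's API:

# Field-wise dotted-version compare, echoing the IFS=.-:/read -ra
# pattern visible in the trace above.
version_lt() {                       # returns 0 (true) when $1 < $2
    local -a a b
    IFS=. read -ra a <<< "$1"
    IFS=. read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                         # equal, so not less-than
}
version_lt 1.15 2 && echo "1.15 < 2"   # mirrors the 'lt 1.15 2' call above

Missing fields default to 0, so 1.15 compares against 2.0 the way the trace's per-index loop does.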
17:28:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:31.661 17:28:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:31.661 17:28:09 -- common/autotest_common.sh@10 -- # set +x 00:03:31.661 ************************************ 00:03:31.661 START TEST json_config 00:03:31.661 ************************************ 00:03:31.661 17:28:09 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:03:31.661 17:28:09 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:31.661 17:28:09 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:03:31.661 17:28:09 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:31.661 17:28:10 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:31.661 17:28:10 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:31.661 17:28:10 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:31.661 17:28:10 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:31.661 17:28:10 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:03:31.661 17:28:10 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:03:31.661 17:28:10 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:03:31.661 17:28:10 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:03:31.661 17:28:10 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:03:31.661 17:28:10 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:03:31.661 17:28:10 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:03:31.661 17:28:10 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:31.661 17:28:10 json_config -- scripts/common.sh@344 -- # case "$op" in 00:03:31.661 17:28:10 json_config -- scripts/common.sh@345 -- # : 1 00:03:31.661 17:28:10 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:31.661 17:28:10 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:31.661 17:28:10 json_config -- scripts/common.sh@365 -- # decimal 1 00:03:31.661 17:28:10 json_config -- scripts/common.sh@353 -- # local d=1 00:03:31.661 17:28:10 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:31.661 17:28:10 json_config -- scripts/common.sh@355 -- # echo 1 00:03:31.661 17:28:10 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:03:31.661 17:28:10 json_config -- scripts/common.sh@366 -- # decimal 2 00:03:31.661 17:28:10 json_config -- scripts/common.sh@353 -- # local d=2 00:03:31.661 17:28:10 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:31.661 17:28:10 json_config -- scripts/common.sh@355 -- # echo 2 00:03:31.661 17:28:10 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:03:31.661 17:28:10 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:31.661 17:28:10 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:31.661 17:28:10 json_config -- scripts/common.sh@368 -- # return 0 00:03:31.661 17:28:10 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:31.661 17:28:10 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:31.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.661 --rc genhtml_branch_coverage=1 00:03:31.661 --rc genhtml_function_coverage=1 00:03:31.661 --rc genhtml_legend=1 00:03:31.662 --rc geninfo_all_blocks=1 00:03:31.662 --rc geninfo_unexecuted_blocks=1 00:03:31.662 00:03:31.662 ' 00:03:31.662 17:28:10 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:31.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.662 --rc genhtml_branch_coverage=1 00:03:31.662 --rc genhtml_function_coverage=1 00:03:31.662 --rc genhtml_legend=1 00:03:31.662 --rc geninfo_all_blocks=1 00:03:31.662 --rc geninfo_unexecuted_blocks=1 00:03:31.662 00:03:31.662 ' 00:03:31.662 17:28:10 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:31.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.662 --rc genhtml_branch_coverage=1 00:03:31.662 --rc genhtml_function_coverage=1 00:03:31.662 --rc genhtml_legend=1 00:03:31.662 --rc geninfo_all_blocks=1 00:03:31.662 --rc geninfo_unexecuted_blocks=1 00:03:31.662 00:03:31.662 ' 00:03:31.662 17:28:10 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:31.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.662 --rc genhtml_branch_coverage=1 00:03:31.662 --rc genhtml_function_coverage=1 00:03:31.662 --rc genhtml_legend=1 00:03:31.662 --rc geninfo_all_blocks=1 00:03:31.662 --rc geninfo_unexecuted_blocks=1 00:03:31.662 00:03:31.662 ' 00:03:31.662 17:28:10 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:03:31.662 17:28:10 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:31.662 17:28:10 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:31.662 17:28:10 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:31.662 17:28:10 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:31.662 17:28:10 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:31.662 17:28:10 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:31.662 17:28:10 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:31.662 17:28:10 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
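Editor's note: the NVMF_IP_PREFIX and NVMF_IP_LEAST_ADDR defaults set in this stretch are what produce the 192.168.100.8 and 192.168.100.9 addresses that appear on the mlx5 ports later in the run: the allocate_nic_ips helper counts up from the least address, one per RDMA interface. A toy sketch of that hand-out, with the interface names assumed; the real helper queries the interfaces instead of hard-coding them:

# Toy version of the per-NIC address hand-out implied by the trace:
# one address per RDMA interface, counting up from the least address.
NVMF_IP_PREFIX=192.168.100
NVMF_IP_LEAST_ADDR=8
count=$NVMF_IP_LEAST_ADDR
for nic in mlx_0_0 mlx_0_1; do        # assumed interface list
    echo "would assign ${NVMF_IP_PREFIX}.${count}/24 to ${nic}"
    (( count++ ))
done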
00:03:31.662 17:28:10 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:31.920 17:28:10 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:31.920 17:28:10 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:31.920 17:28:10 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:03:31.920 17:28:10 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:03:31.920 17:28:10 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:31.920 17:28:10 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:31.920 17:28:10 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:31.920 17:28:10 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:31.920 17:28:10 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:03:31.920 17:28:10 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:03:31.920 17:28:10 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:31.920 17:28:10 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:31.920 17:28:10 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:31.920 17:28:10 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.920 17:28:10 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.920 17:28:10 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.920 17:28:10 json_config -- paths/export.sh@5 -- # export PATH 00:03:31.920 17:28:10 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.920 17:28:10 json_config -- nvmf/common.sh@51 -- # : 0 00:03:31.920 17:28:10 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:31.920 17:28:10 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:31.920 
17:28:10 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:31.920 17:28:10 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:31.920 17:28:10 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:31.920 17:28:10 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:31.920 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:31.920 17:28:10 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:31.920 17:28:10 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:31.920 17:28:10 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:31.920 17:28:10 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:03:31.920 17:28:10 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:31.920 17:28:10 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:31.920 17:28:10 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:31.920 17:28:10 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:31.920 17:28:10 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:31.920 17:28:10 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:31.920 17:28:10 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:31.920 17:28:10 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:31.920 17:28:10 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:31.920 17:28:10 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:31.920 17:28:10 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:03:31.920 17:28:10 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:31.920 17:28:10 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:31.920 17:28:10 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:31.920 17:28:10 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:03:31.920 INFO: JSON configuration test init 00:03:31.920 17:28:10 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:03:31.920 17:28:10 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:03:31.920 17:28:10 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:31.920 17:28:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:31.921 17:28:10 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:03:31.921 17:28:10 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:31.921 17:28:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:31.921 17:28:10 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:03:31.921 17:28:10 json_config -- json_config/common.sh@9 -- # 
local app=target 00:03:31.921 17:28:10 json_config -- json_config/common.sh@10 -- # shift 00:03:31.921 17:28:10 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:31.921 17:28:10 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:31.921 17:28:10 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:31.921 17:28:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:31.921 17:28:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:31.921 17:28:10 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=477932 00:03:31.921 17:28:10 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:31.921 Waiting for target to run... 00:03:31.921 17:28:10 json_config -- json_config/common.sh@25 -- # waitforlisten 477932 /var/tmp/spdk_tgt.sock 00:03:31.921 17:28:10 json_config -- common/autotest_common.sh@831 -- # '[' -z 477932 ']' 00:03:31.921 17:28:10 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:31.921 17:28:10 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:31.921 17:28:10 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:31.921 17:28:10 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:31.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:31.921 17:28:10 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:31.921 17:28:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:31.921 [2024-10-17 17:28:10.144995] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
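Editor's note: the target launched here is deliberately paused: -r puts its JSON-RPC server on a private socket so it cannot collide with other daemons, and --wait-for-rpc defers subsystem initialization until the test replays configuration over RPC. A rough sketch of that pattern, assuming a previously saved config file and using sleep as a stand-in for the waitforlisten helper:

# Sketch: start a target paused in --wait-for-rpc mode on its own
# socket, then replay a saved JSON config through rpc.py.
sock=/var/tmp/spdk_tgt.sock
./build/bin/spdk_tgt -m 0x1 -s 1024 -r "$sock" --wait-for-rpc &
tgt_pid=$!
sleep 1                                   # stand-in for waitforlisten
./scripts/rpc.py -s "$sock" load_config < spdk_tgt_config.json
# ... exercise the configured target, then shut it down:
kill "$tgt_pid"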
00:03:31.921 [2024-10-17 17:28:10.145057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid477932 ] 00:03:32.179 [2024-10-17 17:28:10.437203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:32.179 [2024-10-17 17:28:10.473623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:32.744 17:28:10 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:32.744 17:28:10 json_config -- common/autotest_common.sh@864 -- # return 0 00:03:32.744 17:28:10 json_config -- json_config/common.sh@26 -- # echo '' 00:03:32.744 00:03:32.744 17:28:10 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:03:32.744 17:28:10 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:03:32.744 17:28:10 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:32.744 17:28:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:32.744 17:28:10 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:03:32.744 17:28:10 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:03:32.744 17:28:10 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:32.744 17:28:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:32.744 17:28:11 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:32.744 17:28:11 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:03:32.744 17:28:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:34.118 17:28:12 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:03:34.118 17:28:12 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:34.118 17:28:12 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:34.118 17:28:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:34.118 17:28:12 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:34.118 17:28:12 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:34.118 17:28:12 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:34.118 17:28:12 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:03:34.118 17:28:12 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:03:34.118 17:28:12 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:34.118 17:28:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:34.118 17:28:12 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:34.118 17:28:12 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:03:34.118 17:28:12 json_config -- json_config/json_config.sh@51 -- # local get_types 00:03:34.118 17:28:12 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:03:34.118 17:28:12 json_config -- json_config/json_config.sh@54 -- # 
echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:03:34.118 17:28:12 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:03:34.118 17:28:12 json_config -- json_config/json_config.sh@54 -- # sort 00:03:34.118 17:28:12 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:03:34.118 17:28:12 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:03:34.118 17:28:12 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:03:34.118 17:28:12 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:03:34.118 17:28:12 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:34.118 17:28:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:34.118 17:28:12 json_config -- json_config/json_config.sh@62 -- # return 0 00:03:34.118 17:28:12 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:03:34.118 17:28:12 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:03:34.118 17:28:12 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:03:34.118 17:28:12 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:03:34.118 17:28:12 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:03:34.118 17:28:12 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:03:34.118 17:28:12 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:34.119 17:28:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:34.119 17:28:12 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:34.119 17:28:12 json_config -- json_config/json_config.sh@240 -- # [[ rdma == \r\d\m\a ]] 00:03:34.119 17:28:12 json_config -- json_config/json_config.sh@241 -- # TEST_TRANSPORT=rdma 00:03:34.119 17:28:12 json_config -- json_config/json_config.sh@241 -- # nvmftestinit 00:03:34.119 17:28:12 json_config -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:03:34.119 17:28:12 json_config -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:03:34.119 17:28:12 json_config -- nvmf/common.sh@474 -- # prepare_net_devs 00:03:34.119 17:28:12 json_config -- nvmf/common.sh@436 -- # local -g is_hw=no 00:03:34.119 17:28:12 json_config -- nvmf/common.sh@438 -- # remove_spdk_ns 00:03:34.119 17:28:12 json_config -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:03:34.119 17:28:12 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:03:34.119 17:28:12 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:03:34.119 17:28:12 json_config -- nvmf/common.sh@440 -- # [[ phy-fallback != virt ]] 00:03:34.119 17:28:12 json_config -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:03:34.119 17:28:12 json_config -- nvmf/common.sh@309 -- # xtrace_disable 00:03:34.119 17:28:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:42.225 17:28:19 json_config -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:03:42.225 17:28:19 json_config -- nvmf/common.sh@315 -- # pci_devs=() 00:03:42.225 17:28:19 json_config -- nvmf/common.sh@315 -- # local -a pci_devs 00:03:42.225 17:28:19 json_config -- nvmf/common.sh@316 -- # pci_net_devs=() 00:03:42.225 17:28:19 json_config -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:03:42.225 17:28:19 json_config -- nvmf/common.sh@317 -- # pci_drivers=() 00:03:42.225 
17:28:19 json_config -- nvmf/common.sh@317 -- # local -A pci_drivers 00:03:42.225 17:28:19 json_config -- nvmf/common.sh@319 -- # net_devs=() 00:03:42.225 17:28:19 json_config -- nvmf/common.sh@319 -- # local -ga net_devs 00:03:42.225 17:28:19 json_config -- nvmf/common.sh@320 -- # e810=() 00:03:42.225 17:28:19 json_config -- nvmf/common.sh@320 -- # local -ga e810 00:03:42.225 17:28:19 json_config -- nvmf/common.sh@321 -- # x722=() 00:03:42.225 17:28:19 json_config -- nvmf/common.sh@321 -- # local -ga x722 00:03:42.225 17:28:19 json_config -- nvmf/common.sh@322 -- # mlx=() 00:03:42.225 17:28:19 json_config -- nvmf/common.sh@322 -- # local -ga mlx 00:03:42.225 17:28:19 json_config -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:03:42.225 17:28:19 json_config -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:03:42.225 17:28:19 json_config -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:03:42.225 17:28:19 json_config -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:03:42.226 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:03:42.226 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:03:42.226 17:28:19 json_config -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:03:42.226 Found net devices under 0000:18:00.0: mlx_0_0 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:03:42.226 Found net devices under 0000:18:00.1: mlx_0_1 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@440 -- # is_hw=yes 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@446 -- # rdma_device_init 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@62 -- # uname 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@66 -- # modprobe ib_cm 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@67 -- # modprobe ib_core 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@68 -- # modprobe ib_umad 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@70 -- # modprobe iw_cm 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@528 -- # allocate_nic_ips 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@77 -- # 
get_rdma_if_list 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@109 -- # continue 2 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@109 -- # continue 2 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:03:42.226 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:03:42.226 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:03:42.226 altname enp24s0f0np0 00:03:42.226 altname ens785f0np0 00:03:42.226 inet 192.168.100.8/24 scope global mlx_0_0 00:03:42.226 valid_lft forever preferred_lft forever 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:03:42.226 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:03:42.226 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:03:42.226 altname enp24s0f1np1 00:03:42.226 altname ens785f1np1 
00:03:42.226 inet 192.168.100.9/24 scope global mlx_0_1 00:03:42.226 valid_lft forever preferred_lft forever 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@448 -- # return 0 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@90 -- # get_rdma_if_list 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@108 -- # echo mlx_0_0 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@109 -- # continue 2 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@108 -- # echo mlx_0_1 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@109 -- # continue 2 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@117 -- # cut -d/ -f1 00:03:42.226 17:28:19 json_config -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:03:42.227 17:28:19 json_config -- nvmf/common.sh@117 -- # awk '{print $4}' 00:03:42.227 17:28:19 json_config -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:03:42.227 192.168.100.9' 00:03:42.227 17:28:19 json_config -- nvmf/common.sh@483 -- # head -n 1 00:03:42.227 17:28:19 json_config -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:03:42.227 192.168.100.9' 00:03:42.227 17:28:19 json_config -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:03:42.227 17:28:19 json_config -- 
nvmf/common.sh@484 -- # echo '192.168.100.8 00:03:42.227 192.168.100.9' 00:03:42.227 17:28:19 json_config -- nvmf/common.sh@484 -- # tail -n +2 00:03:42.227 17:28:19 json_config -- nvmf/common.sh@484 -- # head -n 1 00:03:42.227 17:28:19 json_config -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:03:42.227 17:28:19 json_config -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:03:42.227 17:28:19 json_config -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:03:42.227 17:28:19 json_config -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:03:42.227 17:28:19 json_config -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:03:42.227 17:28:19 json_config -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:03:42.227 17:28:19 json_config -- json_config/json_config.sh@244 -- # [[ -z 192.168.100.8 ]] 00:03:42.227 17:28:19 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:42.227 17:28:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:42.227 MallocForNvmf0 00:03:42.227 17:28:19 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:42.227 17:28:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:42.227 MallocForNvmf1 00:03:42.227 17:28:19 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:03:42.227 17:28:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:03:42.227 [2024-10-17 17:28:19.985552] rdma.c:2735:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:03:42.227 [2024-10-17 17:28:20.015279] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa30ef0/0x940ec0) succeed. 00:03:42.227 [2024-10-17 17:28:20.027209] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa34140/0x9a9080) succeed. 
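The address-harvesting step traced above reduces to one pipeline per interface: take the CIDR field from "ip -o -4 addr show" and strip the prefix length, then split the resulting list into first and second target IPs with head/tail. A minimal sketch of that logic, assuming the interface names from this run (mlx_0_0 and mlx_0_1; another host would have different names but the same pipeline):

    get_ip_address() {
        local interface=$1
        # field 4 of `ip -o -4` is the CIDR address, e.g. 192.168.100.8/24
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST="$(get_ip_address mlx_0_0)
    $(get_ip_address mlx_0_1)"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9 in this run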
00:03:42.227 17:28:20 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:42.227 17:28:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:42.227 17:28:20 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:42.227 17:28:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:42.227 17:28:20 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:42.227 17:28:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:42.497 17:28:20 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:03:42.497 17:28:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:03:42.497 [2024-10-17 17:28:20.840316] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:03:42.497 17:28:20 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:03:42.497 17:28:20 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:42.497 17:28:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:42.781 17:28:20 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:03:42.781 17:28:20 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:42.781 17:28:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:42.781 17:28:20 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:03:42.781 17:28:20 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:42.781 17:28:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:42.781 MallocBdevForConfigChangeCheck 00:03:42.781 17:28:21 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:03:42.781 17:28:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:42.781 17:28:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:43.069 17:28:21 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:03:43.069 17:28:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:43.340 17:28:21 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:03:43.340 INFO: shutting down applications... 
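Collected in one place, the tgt_rpc calls above perform the whole NVMe-oF/RDMA provisioning: two malloc bdevs, an RDMA transport, one subsystem, two namespaces, and a listener on the first target IP. The sketch below simply replays the commands from the trace against the same RPC socket; the rpc() wrapper is shorthand for this note, not part of the test, and the rpc.py path is this workspace's:

    rpc() { /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }

    rpc bdev_malloc_create 8 512 --name MallocForNvmf0      # 8 MB malloc bdev, 512-byte blocks
    rpc bdev_malloc_create 4 1024 --name MallocForNvmf1     # 4 MB malloc bdev, 1024-byte blocks
    rpc nvmf_create_transport -t rdma -u 8192 -c 0          # -c 0 is raised to 256 by the target, per the warning above
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420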
00:03:43.340 17:28:21 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:03:43.340 17:28:21 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:03:43.340 17:28:21 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:03:43.340 17:28:21 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:43.905 Calling clear_iscsi_subsystem 00:03:43.905 Calling clear_nvmf_subsystem 00:03:43.905 Calling clear_nbd_subsystem 00:03:43.905 Calling clear_ublk_subsystem 00:03:43.905 Calling clear_vhost_blk_subsystem 00:03:43.905 Calling clear_vhost_scsi_subsystem 00:03:43.905 Calling clear_bdev_subsystem 00:03:43.905 17:28:22 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:03:43.905 17:28:22 json_config -- json_config/json_config.sh@350 -- # count=100 00:03:43.905 17:28:22 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:03:43.905 17:28:22 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:43.905 17:28:22 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:43.905 17:28:22 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:44.167 17:28:22 json_config -- json_config/json_config.sh@352 -- # break 00:03:44.168 17:28:22 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:03:44.168 17:28:22 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:03:44.168 17:28:22 json_config -- json_config/common.sh@31 -- # local app=target 00:03:44.168 17:28:22 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:44.168 17:28:22 json_config -- json_config/common.sh@35 -- # [[ -n 477932 ]] 00:03:44.168 17:28:22 json_config -- json_config/common.sh@38 -- # kill -SIGINT 477932 00:03:44.168 17:28:22 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:44.168 17:28:22 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:44.168 17:28:22 json_config -- json_config/common.sh@41 -- # kill -0 477932 00:03:44.168 17:28:22 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:44.737 17:28:23 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:44.737 17:28:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:44.737 17:28:23 json_config -- json_config/common.sh@41 -- # kill -0 477932 00:03:44.737 17:28:23 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:44.737 17:28:23 json_config -- json_config/common.sh@43 -- # break 00:03:44.737 17:28:23 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:44.737 17:28:23 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:44.737 SPDK target shutdown done 00:03:44.737 17:28:23 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:03:44.737 INFO: relaunching applications... 
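The shutdown just traced is the standard json_config/common.sh pattern: send SIGINT so spdk_tgt can exit cleanly, then poll with kill -0 (which only probes whether the pid still exists) for up to 30 half-second intervals. A condensed sketch, using the target pid from this run:

    pid=477932
    kill -SIGINT "$pid"                        # request a clean shutdown
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break    # process gone: shutdown finished
        sleep 0.5
    done
    echo 'SPDK target shutdown done'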
00:03:44.737 17:28:23 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:03:44.737 17:28:23 json_config -- json_config/common.sh@9 -- # local app=target 00:03:44.737 17:28:23 json_config -- json_config/common.sh@10 -- # shift 00:03:44.737 17:28:23 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:44.737 17:28:23 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:44.737 17:28:23 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:44.737 17:28:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:44.737 17:28:23 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:44.737 17:28:23 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=481694 00:03:44.737 17:28:23 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:44.737 Waiting for target to run... 00:03:44.737 17:28:23 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:03:44.737 17:28:23 json_config -- json_config/common.sh@25 -- # waitforlisten 481694 /var/tmp/spdk_tgt.sock 00:03:44.737 17:28:23 json_config -- common/autotest_common.sh@831 -- # '[' -z 481694 ']' 00:03:44.737 17:28:23 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:44.737 17:28:23 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:44.737 17:28:23 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:44.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:44.737 17:28:23 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:44.737 17:28:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:44.737 [2024-10-17 17:28:23.072486] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:03:44.737 [2024-10-17 17:28:23.072552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid481694 ] 00:03:45.305 [2024-10-17 17:28:23.600335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:45.305 [2024-10-17 17:28:23.644762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:46.679 [2024-10-17 17:28:24.795224] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x179cbb0/0x175aff0) succeed. 00:03:46.679 [2024-10-17 17:28:24.806382] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17c4a60/0x17c5540) succeed. 
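Between the relaunch and the config check sits waitforlisten, which is only partially visible in this trace (it lives in autotest_common.sh). A rough stand-in for what it appears to do, with the retry count taken from the trace (local max_retries=100) and the poll interval and probe RPC being assumptions, not the helper's actual internals:

    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} max_retries=100
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1    # app died during startup
            # any cheap RPC works as a liveness probe; rpc_get_methods is one
            /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
                -t 1 -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
            sleep 0.1                                 # assumed interval; the real helper may differ
        done
        return 1
    }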
00:03:46.679 [2024-10-17 17:28:24.856241] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:03:46.679 17:28:24 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:46.679 17:28:24 json_config -- common/autotest_common.sh@864 -- # return 0 00:03:46.679 17:28:24 json_config -- json_config/common.sh@26 -- # echo '' 00:03:46.679 00:03:46.679 17:28:24 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:03:46.679 17:28:24 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:46.679 INFO: Checking if target configuration is the same... 00:03:46.679 17:28:24 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:03:46.679 17:28:24 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:03:46.679 17:28:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:46.679 + '[' 2 -ne 2 ']' 00:03:46.679 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:46.679 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:03:46.679 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:46.679 +++ basename /dev/fd/62 00:03:46.679 ++ mktemp /tmp/62.XXX 00:03:46.679 + tmp_file_1=/tmp/62.336 00:03:46.679 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:03:46.679 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:46.679 + tmp_file_2=/tmp/spdk_tgt_config.json.hSP 00:03:46.679 + ret=0 00:03:46.679 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:46.937 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:46.938 + diff -u /tmp/62.336 /tmp/spdk_tgt_config.json.hSP 00:03:46.938 + echo 'INFO: JSON config files are the same' 00:03:46.938 INFO: JSON config files are the same 00:03:46.938 + rm /tmp/62.336 /tmp/spdk_tgt_config.json.hSP 00:03:46.938 + exit 0 00:03:46.938 17:28:25 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:03:46.938 17:28:25 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:46.938 INFO: changing configuration and checking if this can be detected... 
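The "same configuration" check that just passed is a normalize-and-diff: pull the live config over save_config, sort both JSON documents with config_filter.py so key ordering cannot cause spurious differences, and compare. A condensed sketch of the json_diff.sh flow (paths are this workspace's; the mktemp suffixes such as /tmp/62.336 differ per run, and the real script feeds the live config in via /dev/fd/62):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py
    live=$(mktemp /tmp/62.XXX)
    saved=$(mktemp /tmp/spdk_tgt_config.json.XXX)
    $rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > "$live"
    $filter -method sort < /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json > "$saved"
    diff -u "$live" "$saved" && echo 'INFO: JSON config files are the same'
    rm "$live" "$saved"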
00:03:46.938 17:28:25 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:46.938 17:28:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:47.196 17:28:25 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:03:47.196 17:28:25 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:03:47.196 17:28:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:47.196 + '[' 2 -ne 2 ']' 00:03:47.196 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:47.196 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:03:47.196 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:47.196 +++ basename /dev/fd/62 00:03:47.196 ++ mktemp /tmp/62.XXX 00:03:47.196 + tmp_file_1=/tmp/62.GBM 00:03:47.196 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:03:47.196 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:47.196 + tmp_file_2=/tmp/spdk_tgt_config.json.mXh 00:03:47.196 + ret=0 00:03:47.196 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:47.454 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:47.712 + diff -u /tmp/62.GBM /tmp/spdk_tgt_config.json.mXh 00:03:47.712 + ret=1 00:03:47.712 + echo '=== Start of file: /tmp/62.GBM ===' 00:03:47.712 + cat /tmp/62.GBM 00:03:47.712 + echo '=== End of file: /tmp/62.GBM ===' 00:03:47.712 + echo '' 00:03:47.712 + echo '=== Start of file: /tmp/spdk_tgt_config.json.mXh ===' 00:03:47.712 + cat /tmp/spdk_tgt_config.json.mXh 00:03:47.712 + echo '=== End of file: /tmp/spdk_tgt_config.json.mXh ===' 00:03:47.712 + echo '' 00:03:47.712 + rm /tmp/62.GBM /tmp/spdk_tgt_config.json.mXh 00:03:47.712 + exit 1 00:03:47.712 17:28:25 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:03:47.712 INFO: configuration change detected. 
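Detection is the mirror image: delete the canary bdev (MallocBdevForConfigChangeCheck, created earlier for exactly this purpose), regenerate the live snapshot, and require the diff to fail. A sketch reusing $rpc, $filter, $live and $saved from the sketch above:

    $rpc -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    $rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort > "$live"
    if diff -u "$live" "$saved" >/dev/null; then
        echo 'ERROR: configuration change went undetected'
        exit 1
    fi
    echo 'INFO: configuration change detected.'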
00:03:47.712 17:28:25 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:03:47.712 17:28:25 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:03:47.712 17:28:25 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:47.712 17:28:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.712 17:28:25 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:03:47.712 17:28:25 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:03:47.712 17:28:25 json_config -- json_config/json_config.sh@324 -- # [[ -n 481694 ]] 00:03:47.712 17:28:25 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:03:47.712 17:28:25 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:03:47.712 17:28:25 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:47.713 17:28:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.713 17:28:25 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:03:47.713 17:28:25 json_config -- json_config/json_config.sh@200 -- # uname -s 00:03:47.713 17:28:25 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:03:47.713 17:28:25 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:03:47.713 17:28:25 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:03:47.713 17:28:25 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:03:47.713 17:28:25 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:47.713 17:28:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:47.713 17:28:25 json_config -- json_config/json_config.sh@330 -- # killprocess 481694 00:03:47.713 17:28:25 json_config -- common/autotest_common.sh@950 -- # '[' -z 481694 ']' 00:03:47.713 17:28:25 json_config -- common/autotest_common.sh@954 -- # kill -0 481694 00:03:47.713 17:28:25 json_config -- common/autotest_common.sh@955 -- # uname 00:03:47.713 17:28:25 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:47.713 17:28:25 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 481694 00:03:47.713 17:28:25 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:47.713 17:28:25 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:47.713 17:28:25 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 481694' 00:03:47.713 killing process with pid 481694 00:03:47.713 17:28:25 json_config -- common/autotest_common.sh@969 -- # kill 481694 00:03:47.713 17:28:25 json_config -- common/autotest_common.sh@974 -- # wait 481694 00:03:48.278 17:28:26 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:03:48.278 17:28:26 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:03:48.278 17:28:26 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:48.278 17:28:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:48.278 17:28:26 json_config -- json_config/json_config.sh@335 -- # return 0 00:03:48.278 17:28:26 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:03:48.278 INFO: Success 00:03:48.278 17:28:26 json_config -- json_config/json_config.sh@1 -- 
# nvmftestfini 00:03:48.278 17:28:26 json_config -- nvmf/common.sh@514 -- # nvmfcleanup 00:03:48.278 17:28:26 json_config -- nvmf/common.sh@121 -- # sync 00:03:48.278 17:28:26 json_config -- nvmf/common.sh@123 -- # '[' '' == tcp ']' 00:03:48.278 17:28:26 json_config -- nvmf/common.sh@123 -- # '[' '' == rdma ']' 00:03:48.278 17:28:26 json_config -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:03:48.278 17:28:26 json_config -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:03:48.278 17:28:26 json_config -- nvmf/common.sh@521 -- # [[ '' == \t\c\p ]] 00:03:48.278 00:03:48.278 real 0m16.779s 00:03:48.279 user 0m18.878s 00:03:48.279 sys 0m8.093s 00:03:48.279 17:28:26 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:48.279 17:28:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:48.279 ************************************ 00:03:48.279 END TEST json_config 00:03:48.279 ************************************ 00:03:48.537 17:28:26 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:48.537 17:28:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:48.537 17:28:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:48.537 17:28:26 -- common/autotest_common.sh@10 -- # set +x 00:03:48.537 ************************************ 00:03:48.537 START TEST json_config_extra_key 00:03:48.537 ************************************ 00:03:48.537 17:28:26 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:48.537 17:28:26 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:48.537 17:28:26 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:03:48.537 17:28:26 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:48.537 17:28:26 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:48.537 17:28:26 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:48.537 17:28:26 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:48.537 17:28:26 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:48.537 17:28:26 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:03:48.537 17:28:26 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:03:48.537 17:28:26 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:03:48.537 17:28:26 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:03:48.537 17:28:26 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:03:48.537 17:28:26 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:03:48.537 17:28:26 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:03:48.537 17:28:26 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:48.537 17:28:26 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:03:48.537 17:28:26 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:03:48.537 17:28:26 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:48.537 17:28:26 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:48.537 17:28:26 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:03:48.537 17:28:26 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:03:48.537 17:28:26 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:48.537 17:28:26 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:03:48.537 17:28:26 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:03:48.537 17:28:26 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:03:48.537 17:28:26 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:03:48.537 17:28:26 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:48.537 17:28:26 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:03:48.538 17:28:26 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:03:48.538 17:28:26 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:48.538 17:28:26 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:48.538 17:28:26 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:03:48.538 17:28:26 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:48.538 17:28:26 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:48.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.538 --rc genhtml_branch_coverage=1 00:03:48.538 --rc genhtml_function_coverage=1 00:03:48.538 --rc genhtml_legend=1 00:03:48.538 --rc geninfo_all_blocks=1 00:03:48.538 --rc geninfo_unexecuted_blocks=1 00:03:48.538 00:03:48.538 ' 00:03:48.538 17:28:26 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:48.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.538 --rc genhtml_branch_coverage=1 00:03:48.538 --rc genhtml_function_coverage=1 00:03:48.538 --rc genhtml_legend=1 00:03:48.538 --rc geninfo_all_blocks=1 00:03:48.538 --rc geninfo_unexecuted_blocks=1 00:03:48.538 00:03:48.538 ' 00:03:48.538 17:28:26 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:48.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.538 --rc genhtml_branch_coverage=1 00:03:48.538 --rc genhtml_function_coverage=1 00:03:48.538 --rc genhtml_legend=1 00:03:48.538 --rc geninfo_all_blocks=1 00:03:48.538 --rc geninfo_unexecuted_blocks=1 00:03:48.538 00:03:48.538 ' 00:03:48.538 17:28:26 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:48.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.538 --rc genhtml_branch_coverage=1 00:03:48.538 --rc genhtml_function_coverage=1 00:03:48.538 --rc genhtml_legend=1 00:03:48.538 --rc geninfo_all_blocks=1 00:03:48.538 --rc geninfo_unexecuted_blocks=1 00:03:48.538 00:03:48.538 ' 00:03:48.538 17:28:26 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:03:48.538 17:28:26 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:03:48.538 17:28:26 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:48.538 17:28:26 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:48.538 17:28:26 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:48.538 17:28:26 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:48.538 
17:28:26 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:48.538 17:28:26 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:48.538 17:28:26 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:48.538 17:28:26 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:48.538 17:28:26 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:48.538 17:28:26 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:48.538 17:28:26 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:03:48.538 17:28:26 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:03:48.538 17:28:26 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:48.538 17:28:26 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:48.538 17:28:26 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:48.538 17:28:26 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:48.538 17:28:26 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:03:48.538 17:28:26 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:03:48.538 17:28:26 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:48.538 17:28:26 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:48.538 17:28:26 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:48.538 17:28:26 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.538 17:28:26 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.538 17:28:26 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.538 17:28:26 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:03:48.538 17:28:26 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.538 17:28:26 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:03:48.538 17:28:26 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:48.538 17:28:26 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:48.538 17:28:26 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:48.538 17:28:26 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:48.538 17:28:26 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:48.538 17:28:26 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:48.538 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:48.538 17:28:26 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:48.538 17:28:26 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:48.538 17:28:26 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:48.538 17:28:26 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:03:48.538 17:28:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:03:48.538 17:28:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:03:48.538 17:28:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:48.538 17:28:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:03:48.538 17:28:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:48.538 17:28:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:03:48.538 17:28:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:03:48.538 17:28:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:03:48.538 17:28:26 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:48.538 17:28:26 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:03:48.538 INFO: launching applications... 
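The common.sh scaffolding traced above keys everything on an app name ('target' here) through associative arrays, so one set of helpers can start, query, and stop any app uniformly. A minimal sketch of the pattern feeding the launch that follows; the array values are verbatim from this run, while backgrounding the binary and capturing $! is an assumption about the helper's internals:

    declare -A app_pid=(['target']='')
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')
    declare -A configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json')

    app=target
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt \
        ${app_params[$app]} -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!    # recorded so shutdown can kill -SIGINT it later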
00:03:48.538 17:28:26 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:03:48.538 17:28:26 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:03:48.538 17:28:26 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:03:48.538 17:28:26 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:48.538 17:28:26 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:48.538 17:28:26 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:03:48.538 17:28:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:48.538 17:28:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:48.538 17:28:26 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=482359 00:03:48.538 17:28:26 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:48.538 Waiting for target to run... 00:03:48.538 17:28:26 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 482359 /var/tmp/spdk_tgt.sock 00:03:48.538 17:28:26 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 482359 ']' 00:03:48.538 17:28:26 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:48.538 17:28:26 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:48.538 17:28:26 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:48.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:48.538 17:28:26 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:48.538 17:28:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:48.538 17:28:26 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:03:48.797 [2024-10-17 17:28:26.946240] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:03:48.797 [2024-10-17 17:28:26.946306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482359 ] 00:03:49.064 [2024-10-17 17:28:27.272428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.064 [2024-10-17 17:28:27.307564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:49.634 17:28:27 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:49.634 17:28:27 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:03:49.634 17:28:27 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:03:49.634 00:03:49.634 17:28:27 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:03:49.634 INFO: shutting down applications... 
00:03:49.634 17:28:27 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:03:49.634 17:28:27 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:03:49.634 17:28:27 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:49.634 17:28:27 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 482359 ]] 00:03:49.634 17:28:27 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 482359 00:03:49.634 17:28:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:49.634 17:28:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:49.634 17:28:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 482359 00:03:49.634 17:28:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:03:49.892 17:28:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:03:49.892 17:28:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:49.892 17:28:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 482359 00:03:49.892 17:28:28 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:49.892 17:28:28 json_config_extra_key -- json_config/common.sh@43 -- # break 00:03:49.892 17:28:28 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:49.892 17:28:28 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:49.892 SPDK target shutdown done 00:03:49.892 17:28:28 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:03:49.892 Success 00:03:49.892 00:03:49.892 real 0m1.548s 00:03:49.892 user 0m1.281s 00:03:49.892 sys 0m0.475s 00:03:49.892 17:28:28 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:49.892 17:28:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:49.892 ************************************ 00:03:49.892 END TEST json_config_extra_key 00:03:49.892 ************************************ 00:03:50.151 17:28:28 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:50.151 17:28:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:50.151 17:28:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:50.151 17:28:28 -- common/autotest_common.sh@10 -- # set +x 00:03:50.151 ************************************ 00:03:50.151 START TEST alias_rpc 00:03:50.151 ************************************ 00:03:50.151 17:28:28 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:50.151 * Looking for test storage... 
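Each of these suites is driven through the same run_test wrapper, which is what produces the START TEST / END TEST banners and the real/user/sys timing lines seen here. Only as a sketch of what the log shows (the real helper in autotest_common.sh does more, such as xtrace bookkeeping and argument-count checks):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                 # run the suite; its exit code decides pass/fail
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }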
00:03:50.151 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:03:50.151 17:28:28 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:50.151 17:28:28 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:03:50.151 17:28:28 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:50.151 17:28:28 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:50.151 17:28:28 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:50.151 17:28:28 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:50.151 17:28:28 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:50.151 17:28:28 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:50.151 17:28:28 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:50.151 17:28:28 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:50.151 17:28:28 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:50.151 17:28:28 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:50.151 17:28:28 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:50.151 17:28:28 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:50.151 17:28:28 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:50.151 17:28:28 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:50.151 17:28:28 alias_rpc -- scripts/common.sh@345 -- # : 1 00:03:50.151 17:28:28 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:50.151 17:28:28 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:50.151 17:28:28 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:50.151 17:28:28 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:03:50.151 17:28:28 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:50.151 17:28:28 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:03:50.151 17:28:28 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:50.151 17:28:28 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:50.151 17:28:28 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:03:50.151 17:28:28 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:50.151 17:28:28 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:03:50.151 17:28:28 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:50.151 17:28:28 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:50.151 17:28:28 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:50.151 17:28:28 alias_rpc -- scripts/common.sh@368 -- # return 0 00:03:50.151 17:28:28 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:50.151 17:28:28 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:50.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.151 --rc genhtml_branch_coverage=1 00:03:50.151 --rc genhtml_function_coverage=1 00:03:50.151 --rc genhtml_legend=1 00:03:50.151 --rc geninfo_all_blocks=1 00:03:50.151 --rc geninfo_unexecuted_blocks=1 00:03:50.151 00:03:50.151 ' 00:03:50.151 17:28:28 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:50.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.151 --rc genhtml_branch_coverage=1 00:03:50.151 --rc genhtml_function_coverage=1 00:03:50.151 --rc genhtml_legend=1 00:03:50.151 --rc geninfo_all_blocks=1 00:03:50.151 --rc geninfo_unexecuted_blocks=1 00:03:50.151 00:03:50.151 ' 00:03:50.151 17:28:28 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:50.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.151 --rc genhtml_branch_coverage=1 00:03:50.151 --rc genhtml_function_coverage=1 00:03:50.151 --rc genhtml_legend=1 00:03:50.151 --rc geninfo_all_blocks=1 00:03:50.151 --rc geninfo_unexecuted_blocks=1 00:03:50.151 00:03:50.151 ' 00:03:50.151 17:28:28 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:50.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.151 --rc genhtml_branch_coverage=1 00:03:50.151 --rc genhtml_function_coverage=1 00:03:50.151 --rc genhtml_legend=1 00:03:50.151 --rc geninfo_all_blocks=1 00:03:50.151 --rc geninfo_unexecuted_blocks=1 00:03:50.151 00:03:50.151 ' 00:03:50.151 17:28:28 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:03:50.151 17:28:28 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=482602 00:03:50.151 17:28:28 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 482602 00:03:50.151 17:28:28 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 482602 ']' 00:03:50.151 17:28:28 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:50.151 17:28:28 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:50.151 17:28:28 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:50.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:50.151 17:28:28 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:50.151 17:28:28 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:50.151 17:28:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.409 [2024-10-17 17:28:28.578009] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
00:03:50.409 [2024-10-17 17:28:28.578074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482602 ] 00:03:50.409 [2024-10-17 17:28:28.649297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:50.409 [2024-10-17 17:28:28.693634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:50.668 17:28:28 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:50.668 17:28:28 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:03:50.668 17:28:28 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:03:50.926 17:28:29 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 482602 00:03:50.926 17:28:29 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 482602 ']' 00:03:50.926 17:28:29 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 482602 00:03:50.926 17:28:29 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:03:50.926 17:28:29 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:50.926 17:28:29 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 482602 00:03:50.926 17:28:29 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:50.926 17:28:29 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:50.927 17:28:29 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 482602' 00:03:50.927 killing process with pid 482602 00:03:50.927 17:28:29 alias_rpc -- common/autotest_common.sh@969 -- # kill 482602 00:03:50.927 17:28:29 alias_rpc -- common/autotest_common.sh@974 -- # wait 482602 00:03:51.185 00:03:51.185 real 0m1.182s 00:03:51.185 user 0m1.131s 00:03:51.185 sys 0m0.469s 00:03:51.185 17:28:29 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:51.185 17:28:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.185 ************************************ 00:03:51.185 END TEST alias_rpc 00:03:51.185 ************************************ 00:03:51.443 17:28:29 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:03:51.443 17:28:29 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:03:51.443 17:28:29 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:51.443 17:28:29 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:51.443 17:28:29 -- common/autotest_common.sh@10 -- # set +x 00:03:51.443 ************************************ 00:03:51.443 START TEST spdkcli_tcp 00:03:51.443 ************************************ 00:03:51.443 17:28:29 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:03:51.443 * Looking for test storage... 
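killprocess, traced just above for pid 482602, is the common teardown: refuse an empty pid, verify the process is still alive with kill -0, peek at its command name (reactor_0 in this run) so a sudo wrapper can be treated specially, then kill and wait so the exit status is reaped. A condensed sketch; the sudo branch is elided here since this trace never takes it:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 1              # must still be running
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid") # reactor_0 in this run
            # the real helper special-cases process_name = sudo; elided here
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                         # reap and propagate the exit status
    }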
00:03:51.443 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:03:51.443 17:28:29 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:51.443 17:28:29 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:03:51.443 17:28:29 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:51.443 17:28:29 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:51.443 17:28:29 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:51.443 17:28:29 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:51.443 17:28:29 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:51.443 17:28:29 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:03:51.443 17:28:29 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:03:51.443 17:28:29 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:03:51.443 17:28:29 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:03:51.443 17:28:29 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:03:51.443 17:28:29 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:03:51.443 17:28:29 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:03:51.443 17:28:29 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:51.443 17:28:29 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:03:51.443 17:28:29 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:03:51.443 17:28:29 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:51.443 17:28:29 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:51.443 17:28:29 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:03:51.443 17:28:29 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:03:51.443 17:28:29 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:51.443 17:28:29 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:03:51.443 17:28:29 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:03:51.443 17:28:29 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:03:51.443 17:28:29 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:03:51.443 17:28:29 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:51.443 17:28:29 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:03:51.443 17:28:29 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:03:51.443 17:28:29 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:51.443 17:28:29 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:51.443 17:28:29 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:03:51.443 17:28:29 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:51.443 17:28:29 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:51.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.443 --rc genhtml_branch_coverage=1 00:03:51.443 --rc genhtml_function_coverage=1 00:03:51.443 --rc genhtml_legend=1 00:03:51.443 --rc geninfo_all_blocks=1 00:03:51.443 --rc geninfo_unexecuted_blocks=1 00:03:51.443 00:03:51.443 ' 00:03:51.443 17:28:29 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:51.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.443 --rc genhtml_branch_coverage=1 00:03:51.443 --rc genhtml_function_coverage=1 00:03:51.443 --rc genhtml_legend=1 00:03:51.443 --rc geninfo_all_blocks=1 00:03:51.443 --rc geninfo_unexecuted_blocks=1 
00:03:51.443 00:03:51.443 ' 00:03:51.443 17:28:29 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:51.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.443 --rc genhtml_branch_coverage=1 00:03:51.443 --rc genhtml_function_coverage=1 00:03:51.443 --rc genhtml_legend=1 00:03:51.443 --rc geninfo_all_blocks=1 00:03:51.443 --rc geninfo_unexecuted_blocks=1 00:03:51.443 00:03:51.443 ' 00:03:51.443 17:28:29 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:51.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.443 --rc genhtml_branch_coverage=1 00:03:51.443 --rc genhtml_function_coverage=1 00:03:51.443 --rc genhtml_legend=1 00:03:51.443 --rc geninfo_all_blocks=1 00:03:51.443 --rc geninfo_unexecuted_blocks=1 00:03:51.443 00:03:51.443 ' 00:03:51.443 17:28:29 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:03:51.443 17:28:29 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:03:51.443 17:28:29 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:03:51.443 17:28:29 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:03:51.443 17:28:29 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:03:51.443 17:28:29 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:03:51.443 17:28:29 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:03:51.443 17:28:29 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:51.443 17:28:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:51.443 17:28:29 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=482841 00:03:51.443 17:28:29 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 482841 00:03:51.443 17:28:29 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:03:51.443 17:28:29 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 482841 ']' 00:03:51.443 17:28:29 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:51.443 17:28:29 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:51.443 17:28:29 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:51.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:51.443 17:28:29 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:51.444 17:28:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:51.701 [2024-10-17 17:28:29.871242] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
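The records that follow show what spdkcli_tcp actually exercises: the JSON-RPC server reached over TCP rather than the usual UNIX socket. spdk_tgt listens on /var/tmp/spdk.sock, socat bridges TCP port 9998 to that socket, and rpc.py then talks to 127.0.0.1:9998. The same wiring, reproduced by hand with the flags used below (-r 100 and -t 2 are rpc.py's retry count and timeout, useful while the listener comes up):

    # start the target on cores 0-1 and let it create /var/tmp/spdk.sock
    ./build/bin/spdk_tgt -m 0x3 -p 0 &
    tgt_pid=$!
    # bridge TCP 9998 to the UNIX-domain RPC socket
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # issue an RPC over TCP; retries until the bridge accepts
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill "$socat_pid" "$tgt_pid"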
00:03:51.701 [2024-10-17 17:28:29.871301] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482841 ] 00:03:51.701 [2024-10-17 17:28:29.943466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:51.701 [2024-10-17 17:28:29.987211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:51.701 [2024-10-17 17:28:29.987213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:51.959 17:28:30 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:51.959 17:28:30 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:03:51.959 17:28:30 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:03:51.959 17:28:30 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=482897 00:03:51.959 17:28:30 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:03:52.217 [ 00:03:52.217 "bdev_malloc_delete", 00:03:52.217 "bdev_malloc_create", 00:03:52.217 "bdev_null_resize", 00:03:52.217 "bdev_null_delete", 00:03:52.217 "bdev_null_create", 00:03:52.217 "bdev_nvme_cuse_unregister", 00:03:52.217 "bdev_nvme_cuse_register", 00:03:52.217 "bdev_opal_new_user", 00:03:52.217 "bdev_opal_set_lock_state", 00:03:52.217 "bdev_opal_delete", 00:03:52.217 "bdev_opal_get_info", 00:03:52.217 "bdev_opal_create", 00:03:52.217 "bdev_nvme_opal_revert", 00:03:52.217 "bdev_nvme_opal_init", 00:03:52.217 "bdev_nvme_send_cmd", 00:03:52.217 "bdev_nvme_set_keys", 00:03:52.217 "bdev_nvme_get_path_iostat", 00:03:52.217 "bdev_nvme_get_mdns_discovery_info", 00:03:52.217 "bdev_nvme_stop_mdns_discovery", 00:03:52.217 "bdev_nvme_start_mdns_discovery", 00:03:52.217 "bdev_nvme_set_multipath_policy", 00:03:52.217 "bdev_nvme_set_preferred_path", 00:03:52.218 "bdev_nvme_get_io_paths", 00:03:52.218 "bdev_nvme_remove_error_injection", 00:03:52.218 "bdev_nvme_add_error_injection", 00:03:52.218 "bdev_nvme_get_discovery_info", 00:03:52.218 "bdev_nvme_stop_discovery", 00:03:52.218 "bdev_nvme_start_discovery", 00:03:52.218 "bdev_nvme_get_controller_health_info", 00:03:52.218 "bdev_nvme_disable_controller", 00:03:52.218 "bdev_nvme_enable_controller", 00:03:52.218 "bdev_nvme_reset_controller", 00:03:52.218 "bdev_nvme_get_transport_statistics", 00:03:52.218 "bdev_nvme_apply_firmware", 00:03:52.218 "bdev_nvme_detach_controller", 00:03:52.218 "bdev_nvme_get_controllers", 00:03:52.218 "bdev_nvme_attach_controller", 00:03:52.218 "bdev_nvme_set_hotplug", 00:03:52.218 "bdev_nvme_set_options", 00:03:52.218 "bdev_passthru_delete", 00:03:52.218 "bdev_passthru_create", 00:03:52.218 "bdev_lvol_set_parent_bdev", 00:03:52.218 "bdev_lvol_set_parent", 00:03:52.218 "bdev_lvol_check_shallow_copy", 00:03:52.218 "bdev_lvol_start_shallow_copy", 00:03:52.218 "bdev_lvol_grow_lvstore", 00:03:52.218 "bdev_lvol_get_lvols", 00:03:52.218 "bdev_lvol_get_lvstores", 00:03:52.218 "bdev_lvol_delete", 00:03:52.218 "bdev_lvol_set_read_only", 00:03:52.218 "bdev_lvol_resize", 00:03:52.218 "bdev_lvol_decouple_parent", 00:03:52.218 "bdev_lvol_inflate", 00:03:52.218 "bdev_lvol_rename", 00:03:52.218 "bdev_lvol_clone_bdev", 00:03:52.218 "bdev_lvol_clone", 00:03:52.218 "bdev_lvol_snapshot", 00:03:52.218 "bdev_lvol_create", 00:03:52.218 "bdev_lvol_delete_lvstore", 00:03:52.218 "bdev_lvol_rename_lvstore", 00:03:52.218 
"bdev_lvol_create_lvstore", 00:03:52.218 "bdev_raid_set_options", 00:03:52.218 "bdev_raid_remove_base_bdev", 00:03:52.218 "bdev_raid_add_base_bdev", 00:03:52.218 "bdev_raid_delete", 00:03:52.218 "bdev_raid_create", 00:03:52.218 "bdev_raid_get_bdevs", 00:03:52.218 "bdev_error_inject_error", 00:03:52.218 "bdev_error_delete", 00:03:52.218 "bdev_error_create", 00:03:52.218 "bdev_split_delete", 00:03:52.218 "bdev_split_create", 00:03:52.218 "bdev_delay_delete", 00:03:52.218 "bdev_delay_create", 00:03:52.218 "bdev_delay_update_latency", 00:03:52.218 "bdev_zone_block_delete", 00:03:52.218 "bdev_zone_block_create", 00:03:52.218 "blobfs_create", 00:03:52.218 "blobfs_detect", 00:03:52.218 "blobfs_set_cache_size", 00:03:52.218 "bdev_aio_delete", 00:03:52.218 "bdev_aio_rescan", 00:03:52.218 "bdev_aio_create", 00:03:52.218 "bdev_ftl_set_property", 00:03:52.218 "bdev_ftl_get_properties", 00:03:52.218 "bdev_ftl_get_stats", 00:03:52.218 "bdev_ftl_unmap", 00:03:52.218 "bdev_ftl_unload", 00:03:52.218 "bdev_ftl_delete", 00:03:52.218 "bdev_ftl_load", 00:03:52.218 "bdev_ftl_create", 00:03:52.218 "bdev_virtio_attach_controller", 00:03:52.218 "bdev_virtio_scsi_get_devices", 00:03:52.218 "bdev_virtio_detach_controller", 00:03:52.218 "bdev_virtio_blk_set_hotplug", 00:03:52.218 "bdev_iscsi_delete", 00:03:52.218 "bdev_iscsi_create", 00:03:52.218 "bdev_iscsi_set_options", 00:03:52.218 "accel_error_inject_error", 00:03:52.218 "ioat_scan_accel_module", 00:03:52.218 "dsa_scan_accel_module", 00:03:52.218 "iaa_scan_accel_module", 00:03:52.218 "keyring_file_remove_key", 00:03:52.218 "keyring_file_add_key", 00:03:52.218 "keyring_linux_set_options", 00:03:52.218 "fsdev_aio_delete", 00:03:52.218 "fsdev_aio_create", 00:03:52.218 "iscsi_get_histogram", 00:03:52.218 "iscsi_enable_histogram", 00:03:52.218 "iscsi_set_options", 00:03:52.218 "iscsi_get_auth_groups", 00:03:52.218 "iscsi_auth_group_remove_secret", 00:03:52.218 "iscsi_auth_group_add_secret", 00:03:52.218 "iscsi_delete_auth_group", 00:03:52.218 "iscsi_create_auth_group", 00:03:52.218 "iscsi_set_discovery_auth", 00:03:52.218 "iscsi_get_options", 00:03:52.218 "iscsi_target_node_request_logout", 00:03:52.218 "iscsi_target_node_set_redirect", 00:03:52.218 "iscsi_target_node_set_auth", 00:03:52.218 "iscsi_target_node_add_lun", 00:03:52.218 "iscsi_get_stats", 00:03:52.218 "iscsi_get_connections", 00:03:52.218 "iscsi_portal_group_set_auth", 00:03:52.218 "iscsi_start_portal_group", 00:03:52.218 "iscsi_delete_portal_group", 00:03:52.218 "iscsi_create_portal_group", 00:03:52.218 "iscsi_get_portal_groups", 00:03:52.218 "iscsi_delete_target_node", 00:03:52.218 "iscsi_target_node_remove_pg_ig_maps", 00:03:52.218 "iscsi_target_node_add_pg_ig_maps", 00:03:52.218 "iscsi_create_target_node", 00:03:52.218 "iscsi_get_target_nodes", 00:03:52.218 "iscsi_delete_initiator_group", 00:03:52.218 "iscsi_initiator_group_remove_initiators", 00:03:52.218 "iscsi_initiator_group_add_initiators", 00:03:52.218 "iscsi_create_initiator_group", 00:03:52.218 "iscsi_get_initiator_groups", 00:03:52.218 "nvmf_set_crdt", 00:03:52.218 "nvmf_set_config", 00:03:52.218 "nvmf_set_max_subsystems", 00:03:52.218 "nvmf_stop_mdns_prr", 00:03:52.218 "nvmf_publish_mdns_prr", 00:03:52.218 "nvmf_subsystem_get_listeners", 00:03:52.218 "nvmf_subsystem_get_qpairs", 00:03:52.218 "nvmf_subsystem_get_controllers", 00:03:52.218 "nvmf_get_stats", 00:03:52.218 "nvmf_get_transports", 00:03:52.218 "nvmf_create_transport", 00:03:52.218 "nvmf_get_targets", 00:03:52.218 "nvmf_delete_target", 00:03:52.218 "nvmf_create_target", 00:03:52.218 
"nvmf_subsystem_allow_any_host", 00:03:52.218 "nvmf_subsystem_set_keys", 00:03:52.218 "nvmf_subsystem_remove_host", 00:03:52.218 "nvmf_subsystem_add_host", 00:03:52.218 "nvmf_ns_remove_host", 00:03:52.218 "nvmf_ns_add_host", 00:03:52.218 "nvmf_subsystem_remove_ns", 00:03:52.218 "nvmf_subsystem_set_ns_ana_group", 00:03:52.218 "nvmf_subsystem_add_ns", 00:03:52.218 "nvmf_subsystem_listener_set_ana_state", 00:03:52.218 "nvmf_discovery_get_referrals", 00:03:52.218 "nvmf_discovery_remove_referral", 00:03:52.218 "nvmf_discovery_add_referral", 00:03:52.218 "nvmf_subsystem_remove_listener", 00:03:52.218 "nvmf_subsystem_add_listener", 00:03:52.218 "nvmf_delete_subsystem", 00:03:52.218 "nvmf_create_subsystem", 00:03:52.218 "nvmf_get_subsystems", 00:03:52.218 "env_dpdk_get_mem_stats", 00:03:52.218 "nbd_get_disks", 00:03:52.218 "nbd_stop_disk", 00:03:52.218 "nbd_start_disk", 00:03:52.218 "ublk_recover_disk", 00:03:52.218 "ublk_get_disks", 00:03:52.218 "ublk_stop_disk", 00:03:52.218 "ublk_start_disk", 00:03:52.218 "ublk_destroy_target", 00:03:52.218 "ublk_create_target", 00:03:52.218 "virtio_blk_create_transport", 00:03:52.218 "virtio_blk_get_transports", 00:03:52.218 "vhost_controller_set_coalescing", 00:03:52.218 "vhost_get_controllers", 00:03:52.218 "vhost_delete_controller", 00:03:52.218 "vhost_create_blk_controller", 00:03:52.218 "vhost_scsi_controller_remove_target", 00:03:52.218 "vhost_scsi_controller_add_target", 00:03:52.218 "vhost_start_scsi_controller", 00:03:52.218 "vhost_create_scsi_controller", 00:03:52.218 "thread_set_cpumask", 00:03:52.218 "scheduler_set_options", 00:03:52.218 "framework_get_governor", 00:03:52.218 "framework_get_scheduler", 00:03:52.218 "framework_set_scheduler", 00:03:52.218 "framework_get_reactors", 00:03:52.218 "thread_get_io_channels", 00:03:52.218 "thread_get_pollers", 00:03:52.218 "thread_get_stats", 00:03:52.218 "framework_monitor_context_switch", 00:03:52.218 "spdk_kill_instance", 00:03:52.218 "log_enable_timestamps", 00:03:52.218 "log_get_flags", 00:03:52.218 "log_clear_flag", 00:03:52.218 "log_set_flag", 00:03:52.218 "log_get_level", 00:03:52.218 "log_set_level", 00:03:52.218 "log_get_print_level", 00:03:52.218 "log_set_print_level", 00:03:52.218 "framework_enable_cpumask_locks", 00:03:52.218 "framework_disable_cpumask_locks", 00:03:52.218 "framework_wait_init", 00:03:52.218 "framework_start_init", 00:03:52.218 "scsi_get_devices", 00:03:52.218 "bdev_get_histogram", 00:03:52.218 "bdev_enable_histogram", 00:03:52.218 "bdev_set_qos_limit", 00:03:52.218 "bdev_set_qd_sampling_period", 00:03:52.218 "bdev_get_bdevs", 00:03:52.218 "bdev_reset_iostat", 00:03:52.218 "bdev_get_iostat", 00:03:52.218 "bdev_examine", 00:03:52.218 "bdev_wait_for_examine", 00:03:52.218 "bdev_set_options", 00:03:52.218 "accel_get_stats", 00:03:52.218 "accel_set_options", 00:03:52.218 "accel_set_driver", 00:03:52.218 "accel_crypto_key_destroy", 00:03:52.218 "accel_crypto_keys_get", 00:03:52.218 "accel_crypto_key_create", 00:03:52.218 "accel_assign_opc", 00:03:52.218 "accel_get_module_info", 00:03:52.218 "accel_get_opc_assignments", 00:03:52.218 "vmd_rescan", 00:03:52.218 "vmd_remove_device", 00:03:52.218 "vmd_enable", 00:03:52.218 "sock_get_default_impl", 00:03:52.218 "sock_set_default_impl", 00:03:52.218 "sock_impl_set_options", 00:03:52.218 "sock_impl_get_options", 00:03:52.218 "iobuf_get_stats", 00:03:52.218 "iobuf_set_options", 00:03:52.218 "keyring_get_keys", 00:03:52.218 "framework_get_pci_devices", 00:03:52.218 "framework_get_config", 00:03:52.218 "framework_get_subsystems", 00:03:52.218 
"fsdev_set_opts", 00:03:52.218 "fsdev_get_opts", 00:03:52.218 "trace_get_info", 00:03:52.218 "trace_get_tpoint_group_mask", 00:03:52.218 "trace_disable_tpoint_group", 00:03:52.218 "trace_enable_tpoint_group", 00:03:52.218 "trace_clear_tpoint_mask", 00:03:52.218 "trace_set_tpoint_mask", 00:03:52.218 "notify_get_notifications", 00:03:52.218 "notify_get_types", 00:03:52.218 "spdk_get_version", 00:03:52.218 "rpc_get_methods" 00:03:52.218 ] 00:03:52.218 17:28:30 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:03:52.218 17:28:30 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:52.218 17:28:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:52.218 17:28:30 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:03:52.218 17:28:30 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 482841 00:03:52.218 17:28:30 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 482841 ']' 00:03:52.218 17:28:30 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 482841 00:03:52.218 17:28:30 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:03:52.219 17:28:30 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:52.219 17:28:30 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 482841 00:03:52.219 17:28:30 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:52.219 17:28:30 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:52.219 17:28:30 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 482841' 00:03:52.219 killing process with pid 482841 00:03:52.219 17:28:30 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 482841 00:03:52.219 17:28:30 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 482841 00:03:52.784 00:03:52.784 real 0m1.253s 00:03:52.784 user 0m2.077s 00:03:52.784 sys 0m0.503s 00:03:52.784 17:28:30 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:52.784 17:28:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:52.784 ************************************ 00:03:52.784 END TEST spdkcli_tcp 00:03:52.784 ************************************ 00:03:52.784 17:28:30 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:52.784 17:28:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:52.784 17:28:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:52.785 17:28:30 -- common/autotest_common.sh@10 -- # set +x 00:03:52.785 ************************************ 00:03:52.785 START TEST dpdk_mem_utility 00:03:52.785 ************************************ 00:03:52.785 17:28:30 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:52.785 * Looking for test storage... 
00:03:52.785 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:03:52.785 17:28:31 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:52.785 17:28:31 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:03:52.785 17:28:31 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:52.785 17:28:31 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:52.785 17:28:31 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:52.785 17:28:31 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:52.785 17:28:31 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:52.785 17:28:31 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:03:52.785 17:28:31 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:03:52.785 17:28:31 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:03:52.785 17:28:31 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:03:52.785 17:28:31 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:03:52.785 17:28:31 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:03:52.785 17:28:31 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:03:52.785 17:28:31 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:52.785 17:28:31 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:03:52.785 17:28:31 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:03:52.785 17:28:31 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:52.785 17:28:31 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:52.785 17:28:31 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:03:52.785 17:28:31 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:03:52.785 17:28:31 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:52.785 17:28:31 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:03:52.785 17:28:31 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:03:52.785 17:28:31 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:03:52.785 17:28:31 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:03:52.785 17:28:31 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:52.785 17:28:31 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:03:52.785 17:28:31 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:03:52.785 17:28:31 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:52.785 17:28:31 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:52.785 17:28:31 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:03:52.785 17:28:31 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:52.785 17:28:31 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:52.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.785 --rc genhtml_branch_coverage=1 00:03:52.785 --rc genhtml_function_coverage=1 00:03:52.785 --rc genhtml_legend=1 00:03:52.785 --rc geninfo_all_blocks=1 00:03:52.785 --rc geninfo_unexecuted_blocks=1 00:03:52.785 00:03:52.785 ' 00:03:52.785 17:28:31 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:52.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.785 --rc 
genhtml_branch_coverage=1 00:03:52.785 --rc genhtml_function_coverage=1 00:03:52.785 --rc genhtml_legend=1 00:03:52.785 --rc geninfo_all_blocks=1 00:03:52.785 --rc geninfo_unexecuted_blocks=1 00:03:52.785 00:03:52.785 ' 00:03:52.785 17:28:31 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:52.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.785 --rc genhtml_branch_coverage=1 00:03:52.785 --rc genhtml_function_coverage=1 00:03:52.785 --rc genhtml_legend=1 00:03:52.785 --rc geninfo_all_blocks=1 00:03:52.785 --rc geninfo_unexecuted_blocks=1 00:03:52.785 00:03:52.785 ' 00:03:52.785 17:28:31 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:52.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.785 --rc genhtml_branch_coverage=1 00:03:52.785 --rc genhtml_function_coverage=1 00:03:52.785 --rc genhtml_legend=1 00:03:52.785 --rc geninfo_all_blocks=1 00:03:52.785 --rc geninfo_unexecuted_blocks=1 00:03:52.785 00:03:52.785 ' 00:03:52.785 17:28:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:03:52.785 17:28:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=483099 00:03:52.785 17:28:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 483099 00:03:52.785 17:28:31 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 483099 ']' 00:03:52.785 17:28:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:03:52.785 17:28:31 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:52.785 17:28:31 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:52.785 17:28:31 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:52.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:52.785 17:28:31 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:52.785 17:28:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:53.043 [2024-10-17 17:28:31.190927] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
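The dpdk_mem_utility records below combine two tools: the env_dpdk_get_mem_stats RPC, which makes the running target write its DPDK heap, mempool and memzone state to /tmp/spdk_mem_dump.txt (the {"filename": ...} reply), and scripts/dpdk_mem_info.py, which renders that dump; -m 0 selects the detailed free/allocated element listing for heap id 0. Run by hand against any live spdk_tgt:

    # dump the target's DPDK memory state to /tmp/spdk_mem_dump.txt
    ./scripts/rpc.py env_dpdk_get_mem_stats
    # summary view: heaps, mempools, memzones
    ./scripts/dpdk_mem_info.py
    # per-element detail for heap 0
    ./scripts/dpdk_mem_info.py -m 0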
00:03:53.043 [2024-10-17 17:28:31.190995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483099 ] 00:03:53.043 [2024-10-17 17:28:31.262598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:53.043 [2024-10-17 17:28:31.307057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:53.301 17:28:31 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:53.301 17:28:31 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:03:53.301 17:28:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:03:53.301 17:28:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:03:53.301 17:28:31 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:53.301 17:28:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:53.301 { 00:03:53.301 "filename": "/tmp/spdk_mem_dump.txt" 00:03:53.301 } 00:03:53.301 17:28:31 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:53.301 17:28:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:03:53.301 DPDK memory size 810.000000 MiB in 1 heap(s) 00:03:53.301 1 heaps totaling size 810.000000 MiB 00:03:53.301 size: 810.000000 MiB heap id: 0 00:03:53.301 end heaps---------- 00:03:53.301 9 mempools totaling size 595.772034 MiB 00:03:53.301 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:03:53.301 size: 158.602051 MiB name: PDU_data_out_Pool 00:03:53.301 size: 92.545471 MiB name: bdev_io_483099 00:03:53.301 size: 50.003479 MiB name: msgpool_483099 00:03:53.301 size: 36.509338 MiB name: fsdev_io_483099 00:03:53.301 size: 21.763794 MiB name: PDU_Pool 00:03:53.301 size: 19.513306 MiB name: SCSI_TASK_Pool 00:03:53.301 size: 4.133484 MiB name: evtpool_483099 00:03:53.301 size: 0.026123 MiB name: Session_Pool 00:03:53.301 end mempools------- 00:03:53.301 6 memzones totaling size 4.142822 MiB 00:03:53.301 size: 1.000366 MiB name: RG_ring_0_483099 00:03:53.301 size: 1.000366 MiB name: RG_ring_1_483099 00:03:53.301 size: 1.000366 MiB name: RG_ring_4_483099 00:03:53.301 size: 1.000366 MiB name: RG_ring_5_483099 00:03:53.301 size: 0.125366 MiB name: RG_ring_2_483099 00:03:53.301 size: 0.015991 MiB name: RG_ring_3_483099 00:03:53.301 end memzones------- 00:03:53.301 17:28:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:03:53.301 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:03:53.301 list of free elements. 
size: 10.862488 MiB 00:03:53.301 element at address: 0x200018a00000 with size: 0.999878 MiB 00:03:53.301 element at address: 0x200018c00000 with size: 0.999878 MiB 00:03:53.301 element at address: 0x200000400000 with size: 0.998535 MiB 00:03:53.301 element at address: 0x200031800000 with size: 0.994446 MiB 00:03:53.301 element at address: 0x200006400000 with size: 0.959839 MiB 00:03:53.301 element at address: 0x200012c00000 with size: 0.954285 MiB 00:03:53.301 element at address: 0x200018e00000 with size: 0.936584 MiB 00:03:53.301 element at address: 0x200000200000 with size: 0.717346 MiB 00:03:53.301 element at address: 0x20001a600000 with size: 0.582886 MiB 00:03:53.301 element at address: 0x200000c00000 with size: 0.495422 MiB 00:03:53.301 element at address: 0x20000a600000 with size: 0.490723 MiB 00:03:53.302 element at address: 0x200019000000 with size: 0.485657 MiB 00:03:53.302 element at address: 0x200003e00000 with size: 0.481934 MiB 00:03:53.302 element at address: 0x200027a00000 with size: 0.410034 MiB 00:03:53.302 element at address: 0x200000800000 with size: 0.355042 MiB 00:03:53.302 list of standard malloc elements. size: 199.218628 MiB 00:03:53.302 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:03:53.302 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:03:53.302 element at address: 0x200018afff80 with size: 1.000122 MiB 00:03:53.302 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:03:53.302 element at address: 0x200018efff80 with size: 1.000122 MiB 00:03:53.302 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:03:53.302 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:03:53.302 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:03:53.302 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:03:53.302 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:03:53.302 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:03:53.302 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:03:53.302 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:03:53.302 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:03:53.302 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:03:53.302 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:03:53.302 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:03:53.302 element at address: 0x20000085b040 with size: 0.000183 MiB 00:03:53.302 element at address: 0x20000085f300 with size: 0.000183 MiB 00:03:53.302 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:03:53.302 element at address: 0x20000087f680 with size: 0.000183 MiB 00:03:53.302 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:03:53.302 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:03:53.302 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:03:53.302 element at address: 0x200000cff000 with size: 0.000183 MiB 00:03:53.302 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:03:53.302 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:03:53.302 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:03:53.302 element at address: 0x200003efb980 with size: 0.000183 MiB 00:03:53.302 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:03:53.302 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:03:53.302 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:03:53.302 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:03:53.302 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:03:53.302 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:03:53.302 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:03:53.302 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:03:53.302 element at address: 0x20001a695380 with size: 0.000183 MiB 00:03:53.302 element at address: 0x20001a695440 with size: 0.000183 MiB 00:03:53.302 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:03:53.302 element at address: 0x200027a69040 with size: 0.000183 MiB 00:03:53.302 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:03:53.302 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:03:53.302 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:03:53.302 list of memzone associated elements. size: 599.918884 MiB 00:03:53.302 element at address: 0x20001a695500 with size: 211.416748 MiB 00:03:53.302 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:03:53.302 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:03:53.302 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:03:53.302 element at address: 0x200012df4780 with size: 92.045044 MiB 00:03:53.302 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_483099_0 00:03:53.302 element at address: 0x200000dff380 with size: 48.003052 MiB 00:03:53.302 associated memzone info: size: 48.002930 MiB name: MP_msgpool_483099_0 00:03:53.302 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:03:53.302 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_483099_0 00:03:53.302 element at address: 0x2000191be940 with size: 20.255554 MiB 00:03:53.302 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:03:53.302 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:03:53.302 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:03:53.302 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:03:53.302 associated memzone info: size: 3.000122 MiB name: MP_evtpool_483099_0 00:03:53.302 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:03:53.302 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_483099 00:03:53.302 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:03:53.302 associated memzone info: size: 1.007996 MiB name: MP_evtpool_483099 00:03:53.302 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:03:53.302 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:03:53.302 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:03:53.302 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:03:53.302 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:03:53.302 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:03:53.302 element at address: 0x200003efba40 with size: 1.008118 MiB 00:03:53.302 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:03:53.302 element at address: 0x200000cff180 with size: 1.000488 MiB 00:03:53.302 associated memzone info: size: 1.000366 MiB name: RG_ring_0_483099 00:03:53.302 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:03:53.302 associated memzone info: size: 1.000366 MiB name: RG_ring_1_483099 00:03:53.302 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:03:53.302 associated memzone info: size: 1.000366 MiB name: RG_ring_4_483099 00:03:53.302 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:03:53.302 associated memzone info: size: 1.000366 MiB name: RG_ring_5_483099 00:03:53.302 element at address: 0x20000087f740 with size: 0.500488 MiB 00:03:53.302 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_483099 00:03:53.302 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:03:53.302 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_483099 00:03:53.302 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:03:53.302 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:03:53.302 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:03:53.302 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:03:53.302 element at address: 0x20001907c540 with size: 0.250488 MiB 00:03:53.302 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:03:53.302 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:03:53.302 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_483099 00:03:53.302 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:03:53.302 associated memzone info: size: 0.125366 MiB name: RG_ring_2_483099 00:03:53.302 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:03:53.302 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:03:53.302 element at address: 0x200027a69100 with size: 0.023743 MiB 00:03:53.302 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:03:53.302 element at address: 0x20000085b100 with size: 0.016113 MiB 00:03:53.302 associated memzone info: size: 0.015991 MiB name: RG_ring_3_483099 00:03:53.302 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:03:53.302 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:03:53.302 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:03:53.302 associated memzone info: size: 0.000183 MiB name: MP_msgpool_483099 00:03:53.302 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:03:53.302 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_483099 00:03:53.302 element at address: 0x20000085af00 with size: 0.000305 MiB 00:03:53.302 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_483099 00:03:53.302 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:03:53.302 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:03:53.302 17:28:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:03:53.302 17:28:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 483099 00:03:53.302 17:28:31 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 483099 ']' 00:03:53.302 17:28:31 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 483099 00:03:53.302 17:28:31 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:03:53.302 17:28:31 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:53.302 17:28:31 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 483099 00:03:53.560 17:28:31 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:53.561 17:28:31 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:53.561 17:28:31 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 483099' 00:03:53.561 killing process with pid 483099 00:03:53.561 17:28:31 dpdk_mem_utility -- 
common/autotest_common.sh@969 -- # kill 483099 00:03:53.561 17:28:31 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 483099 00:03:53.819 00:03:53.819 real 0m1.079s 00:03:53.819 user 0m0.972s 00:03:53.819 sys 0m0.460s 00:03:53.819 17:28:32 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:53.819 17:28:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:53.819 ************************************ 00:03:53.819 END TEST dpdk_mem_utility 00:03:53.819 ************************************ 00:03:53.819 17:28:32 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:03:53.819 17:28:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:53.819 17:28:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:53.819 17:28:32 -- common/autotest_common.sh@10 -- # set +x 00:03:53.819 ************************************ 00:03:53.819 START TEST event 00:03:53.819 ************************************ 00:03:53.819 17:28:32 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:03:54.077 * Looking for test storage... 00:03:54.077 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:03:54.077 17:28:32 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:54.077 17:28:32 event -- common/autotest_common.sh@1691 -- # lcov --version 00:03:54.077 17:28:32 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:54.077 17:28:32 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:54.077 17:28:32 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:54.077 17:28:32 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:54.077 17:28:32 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:54.077 17:28:32 event -- scripts/common.sh@336 -- # IFS=.-: 00:03:54.077 17:28:32 event -- scripts/common.sh@336 -- # read -ra ver1 00:03:54.077 17:28:32 event -- scripts/common.sh@337 -- # IFS=.-: 00:03:54.077 17:28:32 event -- scripts/common.sh@337 -- # read -ra ver2 00:03:54.077 17:28:32 event -- scripts/common.sh@338 -- # local 'op=<' 00:03:54.077 17:28:32 event -- scripts/common.sh@340 -- # ver1_l=2 00:03:54.077 17:28:32 event -- scripts/common.sh@341 -- # ver2_l=1 00:03:54.077 17:28:32 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:54.077 17:28:32 event -- scripts/common.sh@344 -- # case "$op" in 00:03:54.077 17:28:32 event -- scripts/common.sh@345 -- # : 1 00:03:54.077 17:28:32 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:54.078 17:28:32 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:54.078 17:28:32 event -- scripts/common.sh@365 -- # decimal 1 00:03:54.078 17:28:32 event -- scripts/common.sh@353 -- # local d=1 00:03:54.078 17:28:32 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:54.078 17:28:32 event -- scripts/common.sh@355 -- # echo 1 00:03:54.078 17:28:32 event -- scripts/common.sh@365 -- # ver1[v]=1 00:03:54.078 17:28:32 event -- scripts/common.sh@366 -- # decimal 2 00:03:54.078 17:28:32 event -- scripts/common.sh@353 -- # local d=2 00:03:54.078 17:28:32 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:54.078 17:28:32 event -- scripts/common.sh@355 -- # echo 2 00:03:54.078 17:28:32 event -- scripts/common.sh@366 -- # ver2[v]=2 00:03:54.078 17:28:32 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:54.078 17:28:32 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:54.078 17:28:32 event -- scripts/common.sh@368 -- # return 0 00:03:54.078 17:28:32 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:54.078 17:28:32 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:54.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.078 --rc genhtml_branch_coverage=1 00:03:54.078 --rc genhtml_function_coverage=1 00:03:54.078 --rc genhtml_legend=1 00:03:54.078 --rc geninfo_all_blocks=1 00:03:54.078 --rc geninfo_unexecuted_blocks=1 00:03:54.078 00:03:54.078 ' 00:03:54.078 17:28:32 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:54.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.078 --rc genhtml_branch_coverage=1 00:03:54.078 --rc genhtml_function_coverage=1 00:03:54.078 --rc genhtml_legend=1 00:03:54.078 --rc geninfo_all_blocks=1 00:03:54.078 --rc geninfo_unexecuted_blocks=1 00:03:54.078 00:03:54.078 ' 00:03:54.078 17:28:32 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:54.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.078 --rc genhtml_branch_coverage=1 00:03:54.078 --rc genhtml_function_coverage=1 00:03:54.078 --rc genhtml_legend=1 00:03:54.078 --rc geninfo_all_blocks=1 00:03:54.078 --rc geninfo_unexecuted_blocks=1 00:03:54.078 00:03:54.078 ' 00:03:54.078 17:28:32 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:54.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.078 --rc genhtml_branch_coverage=1 00:03:54.078 --rc genhtml_function_coverage=1 00:03:54.078 --rc genhtml_legend=1 00:03:54.078 --rc geninfo_all_blocks=1 00:03:54.078 --rc geninfo_unexecuted_blocks=1 00:03:54.078 00:03:54.078 ' 00:03:54.078 17:28:32 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:03:54.078 17:28:32 event -- bdev/nbd_common.sh@6 -- # set -e 00:03:54.078 17:28:32 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:03:54.078 17:28:32 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:03:54.078 17:28:32 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:54.078 17:28:32 event -- common/autotest_common.sh@10 -- # set +x 00:03:54.078 ************************************ 00:03:54.078 START TEST event_perf 00:03:54.078 ************************************ 00:03:54.078 17:28:32 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 
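event_perf, launched next, is a throughput microbenchmark: every reactor in the -m core mask is driven with a stream of events for -t seconds, and the tool prints per-lcore event counts for the run (the "lcore N: ..." lines that follow). Invocation sketch:

    # 4 reactors (mask 0xF) for 1 second, as in this run
    ./test/event/event_perf/event_perf -m 0xF -t 1
    # a single-core, longer run for comparison (assumed variation; same flags)
    ./test/event/event_perf/event_perf -m 0x1 -t 5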
00:03:54.078 Running I/O for 1 seconds...[2024-10-17 17:28:32.377120] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:03:54.078 [2024-10-17 17:28:32.377196] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483348 ] 00:03:54.078 [2024-10-17 17:28:32.452520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:03:54.335 [2024-10-17 17:28:32.499787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:54.335 [2024-10-17 17:28:32.499874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:03:54.336 [2024-10-17 17:28:32.499948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:03:54.336 [2024-10-17 17:28:32.499949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.270 Running I/O for 1 seconds... 00:03:55.270 lcore 0: 206044 00:03:55.270 lcore 1: 206043 00:03:55.270 lcore 2: 206043 00:03:55.270 lcore 3: 206043 00:03:55.270 done. 00:03:55.270 00:03:55.270 real 0m1.189s 00:03:55.270 user 0m4.096s 00:03:55.270 sys 0m0.089s 00:03:55.270 17:28:33 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:55.270 17:28:33 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:03:55.270 ************************************ 00:03:55.270 END TEST event_perf 00:03:55.270 ************************************ 00:03:55.270 17:28:33 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:03:55.270 17:28:33 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:03:55.270 17:28:33 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:55.270 17:28:33 event -- common/autotest_common.sh@10 -- # set +x 00:03:55.270 ************************************ 00:03:55.270 START TEST event_reactor 00:03:55.270 ************************************ 00:03:55.270 17:28:33 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:03:55.270 [2024-10-17 17:28:33.650679] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
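The reactor test starting here is functional rather than perf-oriented: it registers a one-shot poller plus several timed pollers and prints a trace as they fire; in the output below, "oneshot" marks the one-shot poller and each "tick <n>" line appears to correspond to the timer poller registered with period n, until the -t 1 second run ends. Standalone:

    # run the poller test for 1 second and print the tick trace
    ./test/event/reactor/reactor -t 1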
00:03:55.270 [2024-10-17 17:28:33.650762] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483557 ] 00:03:55.528 [2024-10-17 17:28:33.725220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.528 [2024-10-17 17:28:33.768420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:56.461 test_start 00:03:56.461 oneshot 00:03:56.461 tick 100 00:03:56.461 tick 100 00:03:56.461 tick 250 00:03:56.461 tick 100 00:03:56.461 tick 100 00:03:56.461 tick 100 00:03:56.461 tick 250 00:03:56.461 tick 500 00:03:56.461 tick 100 00:03:56.461 tick 100 00:03:56.461 tick 250 00:03:56.461 tick 100 00:03:56.461 tick 100 00:03:56.461 test_end 00:03:56.461 00:03:56.461 real 0m1.183s 00:03:56.461 user 0m1.105s 00:03:56.461 sys 0m0.074s 00:03:56.461 17:28:34 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:56.461 17:28:34 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:03:56.461 ************************************ 00:03:56.461 END TEST event_reactor 00:03:56.461 ************************************ 00:03:56.461 17:28:34 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:03:56.461 17:28:34 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:03:56.461 17:28:34 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:56.461 17:28:34 event -- common/autotest_common.sh@10 -- # set +x 00:03:56.719 ************************************ 00:03:56.719 START TEST event_reactor_perf 00:03:56.719 ************************************ 00:03:56.719 17:28:34 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:03:56.719 [2024-10-17 17:28:34.913188] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
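reactor_perf, next, measures raw event-dispatch rate on a single reactor: it runs for -t seconds and prints one "Performance: N events per second" line (about 508K events/s on this core, below). Pulling the figure out in a script (assumed post-processing, not part of the tool):

    ./test/event/reactor_perf/reactor_perf -t 1 | awk '/Performance:/ {print $2}'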
00:03:56.719 [2024-10-17 17:28:34.913280] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483755 ] 00:03:56.719 [2024-10-17 17:28:34.986262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.719 [2024-10-17 17:28:35.029546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.092 test_start 00:03:58.092 test_end 00:03:58.092 Performance: 508337 events per second 00:03:58.092 00:03:58.092 real 0m1.183s 00:03:58.092 user 0m1.091s 00:03:58.092 sys 0m0.088s 00:03:58.092 17:28:36 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:58.092 17:28:36 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:03:58.092 ************************************ 00:03:58.092 END TEST event_reactor_perf 00:03:58.092 ************************************ 00:03:58.092 17:28:36 event -- event/event.sh@49 -- # uname -s 00:03:58.092 17:28:36 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:03:58.092 17:28:36 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:03:58.092 17:28:36 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:58.092 17:28:36 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:58.092 17:28:36 event -- common/autotest_common.sh@10 -- # set +x 00:03:58.092 ************************************ 00:03:58.092 START TEST event_scheduler 00:03:58.092 ************************************ 00:03:58.092 17:28:36 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:03:58.092 * Looking for test storage... 
00:03:58.092 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:03:58.092 17:28:36 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:58.092 17:28:36 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:03:58.092 17:28:36 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:58.092 17:28:36 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:58.092 17:28:36 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:58.092 17:28:36 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:58.092 17:28:36 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:58.092 17:28:36 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:03:58.092 17:28:36 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:03:58.092 17:28:36 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:03:58.092 17:28:36 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:03:58.092 17:28:36 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:03:58.093 17:28:36 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:03:58.093 17:28:36 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:03:58.093 17:28:36 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:58.093 17:28:36 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:03:58.093 17:28:36 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:03:58.093 17:28:36 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:58.093 17:28:36 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:58.093 17:28:36 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:03:58.093 17:28:36 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:03:58.093 17:28:36 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:58.093 17:28:36 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:03:58.093 17:28:36 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:03:58.093 17:28:36 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:03:58.093 17:28:36 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:03:58.093 17:28:36 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:58.093 17:28:36 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:03:58.093 17:28:36 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:03:58.093 17:28:36 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:58.093 17:28:36 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:58.093 17:28:36 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:03:58.093 17:28:36 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:58.093 17:28:36 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:58.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.093 --rc genhtml_branch_coverage=1 00:03:58.093 --rc genhtml_function_coverage=1 00:03:58.093 --rc genhtml_legend=1 00:03:58.093 --rc geninfo_all_blocks=1 00:03:58.093 --rc geninfo_unexecuted_blocks=1 00:03:58.093 00:03:58.093 ' 00:03:58.093 17:28:36 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:58.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.093 --rc genhtml_branch_coverage=1 00:03:58.093 --rc genhtml_function_coverage=1 00:03:58.093 --rc genhtml_legend=1 00:03:58.093 --rc geninfo_all_blocks=1 00:03:58.093 --rc geninfo_unexecuted_blocks=1 00:03:58.093 00:03:58.093 ' 00:03:58.093 17:28:36 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:58.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.093 --rc genhtml_branch_coverage=1 00:03:58.093 --rc genhtml_function_coverage=1 00:03:58.093 --rc genhtml_legend=1 00:03:58.093 --rc geninfo_all_blocks=1 00:03:58.093 --rc geninfo_unexecuted_blocks=1 00:03:58.093 00:03:58.093 ' 00:03:58.093 17:28:36 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:58.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:58.093 --rc genhtml_branch_coverage=1 00:03:58.093 --rc genhtml_function_coverage=1 00:03:58.093 --rc genhtml_legend=1 00:03:58.093 --rc geninfo_all_blocks=1 00:03:58.093 --rc geninfo_unexecuted_blocks=1 00:03:58.093 00:03:58.093 ' 00:03:58.093 17:28:36 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:03:58.093 17:28:36 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=483994 00:03:58.093 17:28:36 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:03:58.093 17:28:36 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:03:58.093 17:28:36 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 483994 
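The scheduler app above is started with --wait-for-rpc, so it brings up the RPC server but defers subsystem initialization until told to proceed. That is what lets the records below install the dynamic scheduler before the framework starts: the test issues framework_set_scheduler first, then framework_start_init to release initialization. The equivalent RPC sequence by hand:

    # while the app is parked in --wait-for-rpc state:
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init      # now subsystem init proceeds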
00:03:58.093 17:28:36 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 483994 ']' 00:03:58.093 17:28:36 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:58.093 17:28:36 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:58.093 17:28:36 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:58.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:58.093 17:28:36 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:58.093 17:28:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:58.093 [2024-10-17 17:28:36.390494] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:03:58.093 [2024-10-17 17:28:36.390553] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid483994 ] 00:03:58.093 [2024-10-17 17:28:36.457504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:03:58.351 [2024-10-17 17:28:36.503588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.351 [2024-10-17 17:28:36.503667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:58.351 [2024-10-17 17:28:36.503740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:03:58.351 [2024-10-17 17:28:36.503742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:03:58.351 17:28:36 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:58.351 17:28:36 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:03:58.351 17:28:36 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:03:58.351 17:28:36 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.351 17:28:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:58.351 [2024-10-17 17:28:36.556364] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:03:58.351 [2024-10-17 17:28:36.556385] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:03:58.351 [2024-10-17 17:28:36.556396] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:03:58.351 [2024-10-17 17:28:36.556404] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:03:58.351 [2024-10-17 17:28:36.556411] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:03:58.351 17:28:36 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:58.351 17:28:36 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:03:58.351 17:28:36 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.351 17:28:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:58.351 [2024-10-17 17:28:36.633331] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
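scheduler_create_thread, which begins here, drives the test app through an out-of-tree RPC plugin: rpc.py is invoked with --plugin scheduler_plugin, whose scheduler_thread_create method creates an SPDK thread with a name (-n), cpumask (-m) and active percentage (-a) and returns its thread id; scheduler_thread_set_active later retargets a thread's load (thread 11 to 50% below). The plugin module has to be importable, which the test presumably arranges via PYTHONPATH. By hand:

    # make the test's RPC plugin importable (path assumed from this tree's layout)
    export PYTHONPATH=./test/event/scheduler
    # create a thread pinned to core 0 at 100% active; prints the new thread id
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # later, drop thread 11 to 50% active
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50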
00:03:58.351 17:28:36 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:58.351 17:28:36 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:03:58.351 17:28:36 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:58.351 17:28:36 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:58.351 17:28:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:03:58.351 ************************************ 00:03:58.352 START TEST scheduler_create_thread 00:03:58.352 ************************************ 00:03:58.352 17:28:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:03:58.352 17:28:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:03:58.352 17:28:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.352 17:28:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:58.352 2 00:03:58.352 17:28:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:58.352 17:28:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:03:58.352 17:28:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.352 17:28:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:58.352 3 00:03:58.352 17:28:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:58.352 17:28:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:03:58.352 17:28:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.352 17:28:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:58.352 4 00:03:58.352 17:28:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:58.352 17:28:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:03:58.352 17:28:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.352 17:28:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:58.352 5 00:03:58.352 17:28:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:58.352 17:28:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:03:58.352 17:28:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.352 17:28:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:58.352 6 00:03:58.352 17:28:36 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:58.352 17:28:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:03:58.352 17:28:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.352 17:28:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:58.352 7 00:03:58.352 17:28:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:58.352 17:28:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:03:58.352 17:28:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.352 17:28:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:58.352 8 00:03:58.352 17:28:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:58.352 17:28:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:03:58.352 17:28:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.352 17:28:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:58.609 9 00:03:58.609 17:28:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:58.609 17:28:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:03:58.609 17:28:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.609 17:28:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:58.609 10 00:03:58.609 17:28:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:58.609 17:28:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:03:58.609 17:28:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.609 17:28:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:58.609 17:28:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:58.609 17:28:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:03:58.609 17:28:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:03:58.609 17:28:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:58.609 17:28:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:03:59.173 17:28:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:03:59.173 17:28:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:03:59.173 17:28:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:03:59.173 17:28:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:00.546 17:28:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:00.546 17:28:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:00.546 17:28:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:00.546 17:28:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:00.546 17:28:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:01.478 17:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:01.478 00:04:01.478 real 0m3.101s 00:04:01.478 user 0m0.026s 00:04:01.478 sys 0m0.006s 00:04:01.478 17:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:01.478 17:28:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:01.478 ************************************ 00:04:01.478 END TEST scheduler_create_thread 00:04:01.478 ************************************ 00:04:01.478 17:28:39 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:01.478 17:28:39 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 483994 00:04:01.478 17:28:39 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 483994 ']' 00:04:01.478 17:28:39 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 483994 00:04:01.478 17:28:39 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:04:01.478 17:28:39 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:01.478 17:28:39 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 483994 00:04:01.735 17:28:39 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:04:01.735 17:28:39 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:04:01.735 17:28:39 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 483994' 00:04:01.735 killing process with pid 483994 00:04:01.735 17:28:39 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 483994 00:04:01.735 17:28:39 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 483994 00:04:01.994 [2024-10-17 17:28:40.152593] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
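The create/activate/delete cycle that scheduler_create_thread just exercised maps onto three RPCs from the test's scheduler_plugin. A sketch of the same calls issued directly; one assumption here is that the scheduler_plugin module is importable by rpc.py, which the test script has to arrange (e.g. via PYTHONPATH):

    # create a thread pinned to core 0 reporting 100% busy; the RPC prints the thread id
    tid=$(scripts/rpc.py --plugin scheduler_plugin -s /var/tmp/spdk.sock \
        scheduler_thread_create -n active_pinned -m 0x1 -a 100)
    # change the reported busyness of the thread to 50%, as done for thread 11 above
    scripts/rpc.py --plugin scheduler_plugin -s /var/tmp/spdk.sock \
        scheduler_thread_set_active "$tid" 50
    # tear it down again, as done for the 'deleted' thread above
    scripts/rpc.py --plugin scheduler_plugin -s /var/tmp/spdk.sock \
        scheduler_thread_delete "$tid"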
00:04:01.994 00:04:01.994 real 0m4.207s 00:04:01.994 user 0m6.719s 00:04:01.994 sys 0m0.434s 00:04:01.994 17:28:40 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:01.994 17:28:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:01.994 ************************************ 00:04:01.994 END TEST event_scheduler 00:04:01.994 ************************************ 00:04:02.253 17:28:40 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:02.253 17:28:40 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:02.253 17:28:40 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:02.253 17:28:40 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:02.253 17:28:40 event -- common/autotest_common.sh@10 -- # set +x 00:04:02.253 ************************************ 00:04:02.253 START TEST app_repeat 00:04:02.253 ************************************ 00:04:02.253 17:28:40 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:04:02.253 17:28:40 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:02.253 17:28:40 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:02.253 17:28:40 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:02.253 17:28:40 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:02.253 17:28:40 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:02.253 17:28:40 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:02.253 17:28:40 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:02.253 17:28:40 event.app_repeat -- event/event.sh@19 -- # repeat_pid=484588 00:04:02.253 17:28:40 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:02.253 17:28:40 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:02.253 17:28:40 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 484588' 00:04:02.253 Process app_repeat pid: 484588 00:04:02.253 17:28:40 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:02.253 17:28:40 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:02.253 spdk_app_start Round 0 00:04:02.253 17:28:40 event.app_repeat -- event/event.sh@25 -- # waitforlisten 484588 /var/tmp/spdk-nbd.sock 00:04:02.253 17:28:40 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 484588 ']' 00:04:02.253 17:28:40 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:02.253 17:28:40 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:02.253 17:28:40 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:02.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:02.253 17:28:40 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:02.253 17:28:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:02.253 [2024-10-17 17:28:40.471948] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
00:04:02.253 [2024-10-17 17:28:40.472013] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid484588 ] 00:04:02.253 [2024-10-17 17:28:40.547359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:02.253 [2024-10-17 17:28:40.594750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:02.253 [2024-10-17 17:28:40.594752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.511 17:28:40 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:02.511 17:28:40 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:02.511 17:28:40 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:02.511 Malloc0 00:04:02.511 17:28:40 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:02.769 Malloc1 00:04:02.769 17:28:41 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:02.769 17:28:41 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:02.769 17:28:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:02.769 17:28:41 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:02.769 17:28:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:02.769 17:28:41 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:02.769 17:28:41 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:02.769 17:28:41 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:02.770 17:28:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:02.770 17:28:41 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:02.770 17:28:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:02.770 17:28:41 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:02.770 17:28:41 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:02.770 17:28:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:02.770 17:28:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:02.770 17:28:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:03.028 /dev/nbd0 00:04:03.028 17:28:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:03.028 17:28:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:03.028 17:28:41 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:03.028 17:28:41 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:03.028 17:28:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:03.028 17:28:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:03.028 17:28:41 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 
00:04:03.028 17:28:41 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:03.028 17:28:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:03.028 17:28:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:03.028 17:28:41 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:03.028 1+0 records in 00:04:03.028 1+0 records out 00:04:03.028 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251862 s, 16.3 MB/s 00:04:03.028 17:28:41 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:03.028 17:28:41 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:03.028 17:28:41 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:03.028 17:28:41 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:03.028 17:28:41 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:03.028 17:28:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:03.028 17:28:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:03.028 17:28:41 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:03.286 /dev/nbd1 00:04:03.286 17:28:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:03.286 17:28:41 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:03.286 17:28:41 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:03.286 17:28:41 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:03.286 17:28:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:03.286 17:28:41 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:03.286 17:28:41 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:03.286 17:28:41 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:03.286 17:28:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:03.286 17:28:41 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:03.286 17:28:41 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:03.286 1+0 records in 00:04:03.286 1+0 records out 00:04:03.286 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243349 s, 16.8 MB/s 00:04:03.286 17:28:41 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:03.286 17:28:41 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:03.286 17:28:41 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:03.286 17:28:41 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:03.286 17:28:41 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:03.286 17:28:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:03.286 17:28:41 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:03.286 17:28:41 event.app_repeat -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:03.286 17:28:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:03.286 17:28:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:03.544 17:28:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:03.544 { 00:04:03.544 "nbd_device": "/dev/nbd0", 00:04:03.544 "bdev_name": "Malloc0" 00:04:03.544 }, 00:04:03.544 { 00:04:03.544 "nbd_device": "/dev/nbd1", 00:04:03.544 "bdev_name": "Malloc1" 00:04:03.544 } 00:04:03.544 ]' 00:04:03.544 17:28:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:03.544 17:28:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:03.544 { 00:04:03.544 "nbd_device": "/dev/nbd0", 00:04:03.544 "bdev_name": "Malloc0" 00:04:03.544 }, 00:04:03.544 { 00:04:03.544 "nbd_device": "/dev/nbd1", 00:04:03.544 "bdev_name": "Malloc1" 00:04:03.544 } 00:04:03.544 ]' 00:04:03.544 17:28:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:03.544 /dev/nbd1' 00:04:03.544 17:28:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:03.544 /dev/nbd1' 00:04:03.544 17:28:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:03.544 17:28:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:03.544 17:28:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:03.544 17:28:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:03.544 17:28:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:03.544 17:28:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:03.545 17:28:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:03.545 17:28:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:03.545 17:28:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:03.545 17:28:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:03.545 17:28:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:03.545 17:28:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:03.545 256+0 records in 00:04:03.545 256+0 records out 00:04:03.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00652434 s, 161 MB/s 00:04:03.545 17:28:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:03.545 17:28:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:03.545 256+0 records in 00:04:03.545 256+0 records out 00:04:03.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019503 s, 53.8 MB/s 00:04:03.545 17:28:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:03.545 17:28:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:03.545 256+0 records in 00:04:03.545 256+0 records out 00:04:03.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204713 s, 51.2 MB/s 00:04:03.545 17:28:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
'/dev/nbd0 /dev/nbd1' verify 00:04:03.545 17:28:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:03.545 17:28:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:03.545 17:28:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:03.545 17:28:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:03.545 17:28:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:03.545 17:28:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:03.545 17:28:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:03.545 17:28:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:03.545 17:28:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:03.545 17:28:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:03.545 17:28:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:03.545 17:28:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:03.545 17:28:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:03.545 17:28:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:03.545 17:28:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:03.545 17:28:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:03.545 17:28:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:03.545 17:28:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:03.803 17:28:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:03.803 17:28:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:03.803 17:28:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:03.803 17:28:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:03.803 17:28:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:03.803 17:28:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:03.803 17:28:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:03.803 17:28:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:03.803 17:28:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:03.803 17:28:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:04.061 17:28:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:04.061 17:28:42 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:04.061 17:28:42 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:04.061 17:28:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:04.061 17:28:42 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:04.061 17:28:42 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd1 /proc/partitions 00:04:04.061 17:28:42 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:04.061 17:28:42 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:04.061 17:28:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:04.061 17:28:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:04.061 17:28:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:04.319 17:28:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:04.319 17:28:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:04.319 17:28:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:04.319 17:28:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:04.319 17:28:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:04.319 17:28:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:04.319 17:28:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:04.319 17:28:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:04.319 17:28:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:04.319 17:28:42 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:04.319 17:28:42 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:04.319 17:28:42 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:04.319 17:28:42 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:04.578 17:28:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:04.836 [2024-10-17 17:28:42.996373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:04.836 [2024-10-17 17:28:43.038998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:04.836 [2024-10-17 17:28:43.039000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.836 [2024-10-17 17:28:43.086486] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:04.836 [2024-10-17 17:28:43.086541] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:08.121 17:28:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:08.121 17:28:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:08.121 spdk_app_start Round 1 00:04:08.121 17:28:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 484588 /var/tmp/spdk-nbd.sock 00:04:08.121 17:28:45 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 484588 ']' 00:04:08.121 17:28:45 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:08.121 17:28:45 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:08.121 17:28:45 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:08.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
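The waitfornbd/waitfornbd_exit helpers traced repeatedly in this test poll /proc/partitions and probe the device with a direct-I/O read. Reconstructed from the xtrace above as a sketch — not the verbatim autotest_common.sh source; the retry bound of 20 and the 4096-byte probe follow the trace:

    waitfornbd() {  # wait until /dev/$1 appears and is readable
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        for ((i = 1; i <= 20; i++)); do
            # a single 4k O_DIRECT read proves the device is actually usable
            dd if=/dev/$nbd_name of=nbdtest bs=4096 count=1 iflag=direct
            size=$(stat -c %s nbdtest)
            rm -f nbdtest
            if [ "$size" != 0 ]; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }

    waitfornbd_exit() {  # wait until /dev/$1 disappears after nbd_stop_disk
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
        return 0
    }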
00:04:08.121 17:28:45 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:08.121 17:28:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:08.121 17:28:46 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:08.121 17:28:46 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:08.121 17:28:46 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:08.121 Malloc0 00:04:08.121 17:28:46 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:08.121 Malloc1 00:04:08.121 17:28:46 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:08.121 17:28:46 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:08.121 17:28:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:08.121 17:28:46 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:08.121 17:28:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:08.121 17:28:46 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:08.121 17:28:46 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:08.121 17:28:46 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:08.121 17:28:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:08.121 17:28:46 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:08.121 17:28:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:08.121 17:28:46 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:08.121 17:28:46 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:08.121 17:28:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:08.121 17:28:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:08.121 17:28:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:08.380 /dev/nbd0 00:04:08.380 17:28:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:08.380 17:28:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:08.380 17:28:46 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:08.380 17:28:46 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:08.380 17:28:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:08.380 17:28:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:08.380 17:28:46 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:08.380 17:28:46 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:08.380 17:28:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:08.380 17:28:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:08.380 17:28:46 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:04:08.380 1+0 records in 00:04:08.380 1+0 records out 00:04:08.380 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224691 s, 18.2 MB/s 00:04:08.380 17:28:46 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:08.380 17:28:46 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:08.380 17:28:46 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:08.380 17:28:46 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:08.380 17:28:46 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:08.380 17:28:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:08.380 17:28:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:08.380 17:28:46 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:08.639 /dev/nbd1 00:04:08.639 17:28:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:08.639 17:28:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:08.639 17:28:46 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:08.639 17:28:46 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:08.639 17:28:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:08.639 17:28:46 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:08.639 17:28:46 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:08.639 17:28:46 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:08.639 17:28:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:08.639 17:28:46 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:08.639 17:28:46 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:08.639 1+0 records in 00:04:08.639 1+0 records out 00:04:08.639 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223872 s, 18.3 MB/s 00:04:08.639 17:28:46 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:08.639 17:28:46 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:08.639 17:28:46 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:08.640 17:28:46 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:08.640 17:28:46 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:08.640 17:28:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:08.640 17:28:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:08.640 17:28:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:08.640 17:28:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:08.640 17:28:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:08.898 { 00:04:08.898 
"nbd_device": "/dev/nbd0", 00:04:08.898 "bdev_name": "Malloc0" 00:04:08.898 }, 00:04:08.898 { 00:04:08.898 "nbd_device": "/dev/nbd1", 00:04:08.898 "bdev_name": "Malloc1" 00:04:08.898 } 00:04:08.898 ]' 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:08.898 { 00:04:08.898 "nbd_device": "/dev/nbd0", 00:04:08.898 "bdev_name": "Malloc0" 00:04:08.898 }, 00:04:08.898 { 00:04:08.898 "nbd_device": "/dev/nbd1", 00:04:08.898 "bdev_name": "Malloc1" 00:04:08.898 } 00:04:08.898 ]' 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:08.898 /dev/nbd1' 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:08.898 /dev/nbd1' 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:08.898 256+0 records in 00:04:08.898 256+0 records out 00:04:08.898 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0113226 s, 92.6 MB/s 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:08.898 256+0 records in 00:04:08.898 256+0 records out 00:04:08.898 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.019932 s, 52.6 MB/s 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:08.898 256+0 records in 00:04:08.898 256+0 records out 00:04:08.898 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207483 s, 50.5 MB/s 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:08.898 17:28:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:09.156 17:28:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:09.156 17:28:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:09.156 17:28:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:09.156 17:28:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:09.156 17:28:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:09.156 17:28:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:09.156 17:28:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:09.156 17:28:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:09.156 17:28:47 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:09.156 17:28:47 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:09.415 17:28:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:09.415 17:28:47 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:09.415 17:28:47 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:09.415 17:28:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:09.415 17:28:47 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:09.415 17:28:47 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:09.415 17:28:47 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:09.415 17:28:47 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:09.415 17:28:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:09.415 17:28:47 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:09.415 17:28:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:09.673 17:28:47 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:09.673 17:28:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:09.673 17:28:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:09.673 17:28:47 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:09.673 17:28:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:09.673 17:28:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:09.673 17:28:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:09.673 17:28:47 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:09.673 17:28:47 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:09.673 17:28:47 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:09.673 17:28:47 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:09.673 17:28:47 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:09.673 17:28:47 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:09.931 17:28:48 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:09.931 [2024-10-17 17:28:48.300065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:10.189 [2024-10-17 17:28:48.344374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:10.189 [2024-10-17 17:28:48.344376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.189 [2024-10-17 17:28:48.392300] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:10.189 [2024-10-17 17:28:48.392355] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:13.475 17:28:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:13.475 17:28:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:13.475 spdk_app_start Round 2 00:04:13.475 17:28:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 484588 /var/tmp/spdk-nbd.sock 00:04:13.475 17:28:51 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 484588 ']' 00:04:13.475 17:28:51 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:13.475 17:28:51 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:13.475 17:28:51 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:13.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
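The empty-disk check traced just above — nbd_get_disks returning '[]' once both devices are stopped — condenses to the following sketch; grep -c exits non-zero on a zero count, which is why the trace shows a trailing true:

    json=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    names=$(echo "$json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ]  # no nbd devices remain exported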
00:04:13.475 17:28:51 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:13.475 17:28:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:13.475 17:28:51 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:13.475 17:28:51 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:13.475 17:28:51 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:13.475 Malloc0 00:04:13.475 17:28:51 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:13.475 Malloc1 00:04:13.475 17:28:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:13.475 17:28:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.475 17:28:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:13.475 17:28:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:13.475 17:28:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.475 17:28:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:13.475 17:28:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:13.475 17:28:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.475 17:28:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:13.475 17:28:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:13.475 17:28:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.475 17:28:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:13.475 17:28:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:13.476 17:28:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:13.476 17:28:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:13.476 17:28:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:13.734 /dev/nbd0 00:04:13.734 17:28:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:13.734 17:28:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:13.734 17:28:51 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:13.734 17:28:51 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:13.734 17:28:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:13.734 17:28:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:13.734 17:28:51 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:13.734 17:28:51 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:13.734 17:28:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:13.734 17:28:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:13.734 17:28:51 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:04:13.734 1+0 records in 00:04:13.734 1+0 records out 00:04:13.734 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213072 s, 19.2 MB/s 00:04:13.734 17:28:51 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:13.734 17:28:51 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:13.734 17:28:51 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:13.734 17:28:51 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:13.734 17:28:51 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:13.734 17:28:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:13.734 17:28:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:13.734 17:28:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:13.993 /dev/nbd1 00:04:13.993 17:28:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:13.993 17:28:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:13.993 17:28:52 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:13.993 17:28:52 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:13.993 17:28:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:13.993 17:28:52 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:13.993 17:28:52 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:13.993 17:28:52 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:13.993 17:28:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:13.993 17:28:52 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:13.993 17:28:52 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:13.993 1+0 records in 00:04:13.993 1+0 records out 00:04:13.993 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284244 s, 14.4 MB/s 00:04:13.993 17:28:52 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:13.993 17:28:52 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:13.993 17:28:52 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:04:13.993 17:28:52 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:13.993 17:28:52 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:13.993 17:28:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:13.993 17:28:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:13.993 17:28:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:13.993 17:28:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.993 17:28:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:14.251 17:28:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:14.251 { 00:04:14.251 
"nbd_device": "/dev/nbd0", 00:04:14.251 "bdev_name": "Malloc0" 00:04:14.251 }, 00:04:14.251 { 00:04:14.251 "nbd_device": "/dev/nbd1", 00:04:14.251 "bdev_name": "Malloc1" 00:04:14.251 } 00:04:14.251 ]' 00:04:14.251 17:28:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:14.251 { 00:04:14.251 "nbd_device": "/dev/nbd0", 00:04:14.251 "bdev_name": "Malloc0" 00:04:14.251 }, 00:04:14.251 { 00:04:14.251 "nbd_device": "/dev/nbd1", 00:04:14.251 "bdev_name": "Malloc1" 00:04:14.251 } 00:04:14.251 ]' 00:04:14.251 17:28:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:14.251 17:28:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:14.251 /dev/nbd1' 00:04:14.251 17:28:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:14.251 /dev/nbd1' 00:04:14.251 17:28:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:14.251 17:28:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:14.251 17:28:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:14.251 17:28:52 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:14.251 17:28:52 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:14.251 17:28:52 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:14.251 17:28:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:14.251 17:28:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:14.251 17:28:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:14.251 17:28:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:14.251 17:28:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:14.251 17:28:52 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:14.251 256+0 records in 00:04:14.251 256+0 records out 00:04:14.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109464 s, 95.8 MB/s 00:04:14.251 17:28:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:14.251 17:28:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:14.251 256+0 records in 00:04:14.251 256+0 records out 00:04:14.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195981 s, 53.5 MB/s 00:04:14.251 17:28:52 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:14.251 17:28:52 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:14.251 256+0 records in 00:04:14.251 256+0 records out 00:04:14.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209443 s, 50.1 MB/s 00:04:14.251 17:28:52 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:14.251 17:28:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:14.251 17:28:52 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:14.251 17:28:52 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:14.251 17:28:52 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:14.251 17:28:52 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:14.251 17:28:52 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:14.251 17:28:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:14.251 17:28:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:14.251 17:28:52 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:14.252 17:28:52 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:14.252 17:28:52 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:04:14.252 17:28:52 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:14.252 17:28:52 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:14.252 17:28:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:14.252 17:28:52 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:14.252 17:28:52 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:14.252 17:28:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:14.252 17:28:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:14.510 17:28:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:14.510 17:28:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:14.510 17:28:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:14.510 17:28:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:14.510 17:28:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:14.510 17:28:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:14.510 17:28:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:14.510 17:28:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:14.510 17:28:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:14.510 17:28:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:14.768 17:28:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:14.768 17:28:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:14.768 17:28:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:14.768 17:28:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:14.768 17:28:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:14.768 17:28:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:14.768 17:28:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:14.768 17:28:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:14.768 17:28:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:14.768 17:28:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:14.768 17:28:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:15.027 17:28:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:15.027 17:28:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:15.027 17:28:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:15.027 17:28:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:15.027 17:28:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:15.027 17:28:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:15.027 17:28:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:15.027 17:28:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:15.027 17:28:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:15.027 17:28:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:15.027 17:28:53 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:15.027 17:28:53 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:15.027 17:28:53 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:15.286 17:28:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:15.286 [2024-10-17 17:28:53.614608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:15.286 [2024-10-17 17:28:53.657767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:15.286 [2024-10-17 17:28:53.657768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.544 [2024-10-17 17:28:53.705604] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:15.544 [2024-10-17 17:28:53.705651] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:18.229 17:28:56 event.app_repeat -- event/event.sh@38 -- # waitforlisten 484588 /var/tmp/spdk-nbd.sock 00:04:18.229 17:28:56 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 484588 ']' 00:04:18.229 17:28:56 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:18.229 17:28:56 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:18.229 17:28:56 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:18.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
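The write/verify pass traced above is the heart of nbd_dd_data_verify: fill a scratch file from /dev/urandom, dd it onto every exported NBD device, then cmp each device back against the scratch file. A condensed sketch of that round trip, with device names and the scratch path taken from the trace rather than any fixed API:

    pattern=/tmp/nbdrandtest
    dd if=/dev/urandom of="$pattern" bs=4096 count=256            # 1 MiB of random data
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$pattern" of="$dev" bs=4096 count=256 oflag=direct # write through, not just into the page cache
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$pattern" "$dev"                            # byte-compare the first 1 MiB
    done
    rm "$pattern"

oflag=direct keeps the writes from being satisfied by the page cache alone, so the compare genuinely exercises the Malloc bdevs behind the NBD exports.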
00:04:18.229 17:28:56 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:18.229 17:28:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:18.488 17:28:56 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:18.488 17:28:56 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:18.488 17:28:56 event.app_repeat -- event/event.sh@39 -- # killprocess 484588 00:04:18.488 17:28:56 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 484588 ']' 00:04:18.488 17:28:56 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 484588 00:04:18.488 17:28:56 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:04:18.488 17:28:56 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:18.488 17:28:56 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 484588 00:04:18.488 17:28:56 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:18.488 17:28:56 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:18.488 17:28:56 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 484588' 00:04:18.488 killing process with pid 484588 00:04:18.488 17:28:56 event.app_repeat -- common/autotest_common.sh@969 -- # kill 484588 00:04:18.488 17:28:56 event.app_repeat -- common/autotest_common.sh@974 -- # wait 484588 00:04:18.488 spdk_app_start is called in Round 0. 00:04:18.488 Shutdown signal received, stop current app iteration 00:04:18.488 Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 reinitialization... 00:04:18.488 spdk_app_start is called in Round 1. 00:04:18.488 Shutdown signal received, stop current app iteration 00:04:18.488 Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 reinitialization... 00:04:18.488 spdk_app_start is called in Round 2. 00:04:18.488 Shutdown signal received, stop current app iteration 00:04:18.488 Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 reinitialization... 00:04:18.488 spdk_app_start is called in Round 3. 00:04:18.488 Shutdown signal received, stop current app iteration 00:04:18.488 17:28:56 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:18.488 17:28:56 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:18.488 00:04:18.488 real 0m16.410s 00:04:18.488 user 0m35.513s 00:04:18.488 sys 0m3.040s 00:04:18.488 17:28:56 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:18.488 17:28:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:18.488 ************************************ 00:04:18.488 END TEST app_repeat 00:04:18.488 ************************************ 00:04:18.747 17:28:56 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:18.747 17:28:56 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:18.747 17:28:56 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:18.747 17:28:56 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:18.747 17:28:56 event -- common/autotest_common.sh@10 -- # set +x 00:04:18.747 ************************************ 00:04:18.747 START TEST cpu_locks 00:04:18.747 ************************************ 00:04:18.747 17:28:56 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:18.747 * Looking for test storage... 
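The killprocess sequence traced for pid 484588 follows a fixed shape: confirm the process is alive, read its command name, refuse to touch a sudo wrapper, then kill and reap it. A simplified reconstruction of that shape (the real helper in autotest_common.sh carries extra retries and sudo handling):

    killprocess() {
        local pid=$1 name
        kill -0 "$pid" || return 1                 # must still be alive
        name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0
        [ "$name" = sudo ] && return 1             # never kill a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                        # reap; works because spdk_tgt is our child
    }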
00:04:18.747 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:04:18.747 17:28:57 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:18.747 17:28:57 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:04:18.747 17:28:57 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:18.747 17:28:57 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:18.747 17:28:57 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:18.747 17:28:57 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:18.747 17:28:57 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:18.747 17:28:57 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:18.747 17:28:57 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:18.747 17:28:57 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:18.747 17:28:57 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:18.747 17:28:57 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:18.747 17:28:57 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:18.747 17:28:57 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:18.747 17:28:57 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:18.747 17:28:57 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:18.747 17:28:57 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:18.747 17:28:57 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:18.747 17:28:57 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:18.747 17:28:57 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:18.747 17:28:57 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:18.747 17:28:57 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:18.747 17:28:57 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:18.747 17:28:57 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:18.747 17:28:57 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:18.747 17:28:57 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:18.747 17:28:57 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:18.747 17:28:57 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:18.747 17:28:57 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:18.747 17:28:57 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:18.747 17:28:57 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:18.747 17:28:57 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:18.747 17:28:57 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:18.747 17:28:57 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:18.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.747 --rc genhtml_branch_coverage=1 00:04:18.747 --rc genhtml_function_coverage=1 00:04:18.747 --rc genhtml_legend=1 00:04:18.747 --rc geninfo_all_blocks=1 00:04:18.747 --rc geninfo_unexecuted_blocks=1 00:04:18.747 00:04:18.747 ' 00:04:18.747 17:28:57 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:18.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.747 --rc genhtml_branch_coverage=1 00:04:18.747 --rc 
genhtml_function_coverage=1 00:04:18.747 --rc genhtml_legend=1 00:04:18.747 --rc geninfo_all_blocks=1 00:04:18.747 --rc geninfo_unexecuted_blocks=1 00:04:18.747 00:04:18.747 ' 00:04:18.747 17:28:57 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:18.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.747 --rc genhtml_branch_coverage=1 00:04:18.747 --rc genhtml_function_coverage=1 00:04:18.747 --rc genhtml_legend=1 00:04:18.747 --rc geninfo_all_blocks=1 00:04:18.747 --rc geninfo_unexecuted_blocks=1 00:04:18.747 00:04:18.747 ' 00:04:18.747 17:28:57 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:18.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.747 --rc genhtml_branch_coverage=1 00:04:18.747 --rc genhtml_function_coverage=1 00:04:18.747 --rc genhtml_legend=1 00:04:18.747 --rc geninfo_all_blocks=1 00:04:18.747 --rc geninfo_unexecuted_blocks=1 00:04:18.747 00:04:18.747 ' 00:04:18.747 17:28:57 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:18.747 17:28:57 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:18.747 17:28:57 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:18.747 17:28:57 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:18.747 17:28:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:18.747 17:28:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:18.747 17:28:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:19.006 ************************************ 00:04:19.006 START TEST default_locks 00:04:19.006 ************************************ 00:04:19.006 17:28:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:04:19.006 17:28:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=487021 00:04:19.006 17:28:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:19.006 17:28:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 487021 00:04:19.006 17:28:57 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 487021 ']' 00:04:19.006 17:28:57 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:19.006 17:28:57 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:19.006 17:28:57 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:19.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:19.006 17:28:57 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:19.006 17:28:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:19.006 [2024-10-17 17:28:57.216821] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
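The lcov gate traced above ("lt 1.15 2") is an element-wise version compare: split both versions on ".", "-" and ":", then walk the components numerically. A trimmed sketch of the cmp_versions helper from scripts/common.sh, assuming numeric components only (the real one normalizes each component through a "decimal" filter first):

    cmp_versions() {    # e.g. cmp_versions 1.15 '<' 2  ->  exit 0
        local -a ver1 ver2
        local op=$2 v a b
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            if ((a > b)); then [[ $op == '>' ]]; return; fi
            if ((a < b)); then [[ $op == '<' ]]; return; fi
        done
        [[ $op == *'='* ]]   # equal versions satisfy only <=, >=, ==
    }

Here lcov 1.x compares less than 2, which is what selects the legacy "--rc lcov_*" option spelling exported just above.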
00:04:19.006 [2024-10-17 17:28:57.216875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid487021 ] 00:04:19.006 [2024-10-17 17:28:57.287172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.006 [2024-10-17 17:28:57.331130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.265 17:28:57 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:19.265 17:28:57 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:04:19.265 17:28:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 487021 00:04:19.265 17:28:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 487021 00:04:19.265 17:28:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:19.833 lslocks: write error 00:04:19.833 17:28:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 487021 00:04:19.833 17:28:57 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 487021 ']' 00:04:19.833 17:28:57 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 487021 00:04:19.833 17:28:57 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:04:19.833 17:28:57 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:19.833 17:28:57 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 487021 00:04:19.833 17:28:57 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:19.833 17:28:57 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:19.833 17:28:57 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 487021' 00:04:19.833 killing process with pid 487021 00:04:19.833 17:28:57 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 487021 00:04:19.833 17:28:57 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 487021 00:04:20.092 17:28:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 487021 00:04:20.092 17:28:58 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:04:20.092 17:28:58 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 487021 00:04:20.092 17:28:58 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:20.092 17:28:58 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:20.092 17:28:58 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:20.092 17:28:58 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:20.092 17:28:58 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 487021 00:04:20.092 17:28:58 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 487021 ']' 00:04:20.092 17:28:58 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.092 17:28:58 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 
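The locks_exist check above is just lslocks piped into grep -q. Because grep -q exits at the first match, lslocks gets EPIPE on the rest of its output; the stray "lslocks: write error" in the trace comes from that broken pipe, not from a failing check. A sketch, using the spdk_cpu_lock file prefix visible later in this log:

    locks_exist() {
        local pid=$1
        # grep -q quits on the first hit, which is what makes
        # lslocks print "write error" on the severed pipe
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }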
00:04:20.092 17:28:58 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.092 17:28:58 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:20.092 17:28:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:20.092 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (487021) - No such process 00:04:20.093 ERROR: process (pid: 487021) is no longer running 00:04:20.093 17:28:58 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:20.093 17:28:58 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:04:20.093 17:28:58 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:04:20.093 17:28:58 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:20.093 17:28:58 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:20.093 17:28:58 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:20.093 17:28:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:20.093 17:28:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:20.093 17:28:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:20.093 17:28:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:20.093 00:04:20.093 real 0m1.154s 00:04:20.093 user 0m1.072s 00:04:20.093 sys 0m0.558s 00:04:20.093 17:28:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:20.093 17:28:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:20.093 ************************************ 00:04:20.093 END TEST default_locks 00:04:20.093 ************************************ 00:04:20.093 17:28:58 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:20.093 17:28:58 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:20.093 17:28:58 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:20.093 17:28:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:20.093 ************************************ 00:04:20.093 START TEST default_locks_via_rpc 00:04:20.093 ************************************ 00:04:20.093 17:28:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:04:20.093 17:28:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=487230 00:04:20.093 17:28:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 487230 00:04:20.093 17:28:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:20.093 17:28:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 487230 ']' 00:04:20.093 17:28:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.093 17:28:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:20.093 17:28:58 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.093 17:28:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:20.093 17:28:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.093 [2024-10-17 17:28:58.455873] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:04:20.093 [2024-10-17 17:28:58.455931] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid487230 ] 00:04:20.352 [2024-10-17 17:28:58.528912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.352 [2024-10-17 17:28:58.574803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.610 17:28:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:20.610 17:28:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:20.610 17:28:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:20.610 17:28:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.610 17:28:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.610 17:28:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.610 17:28:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:20.610 17:28:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:20.610 17:28:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:20.610 17:28:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:20.610 17:28:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:20.610 17:28:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:20.610 17:28:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.610 17:28:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:20.610 17:28:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 487230 00:04:20.610 17:28:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 487230 00:04:20.610 17:28:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:21.176 17:28:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 487230 00:04:21.176 17:28:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 487230 ']' 00:04:21.176 17:28:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 487230 00:04:21.176 17:28:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:04:21.176 17:28:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
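default_locks_via_rpc flips the core locks at runtime instead of at startup. Condensed, the sequence being traced here is (socket and script paths as in the trace):

    # target already running with -m 0x1 and answering on /var/tmp/spdk.sock
    scripts/rpc.py framework_disable_cpumask_locks    # releases the core-0 lock
    # ... assert via lslocks that no spdk_cpu_lock entry remains ...
    scripts/rpc.py framework_enable_cpumask_locks     # claims it again
    lslocks -p "$spdk_tgt_pid" | grep spdk_cpu_lock   # lock visible once more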
00:04:21.176 17:28:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 487230 00:04:21.176 17:28:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:21.176 17:28:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:21.176 17:28:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 487230' 00:04:21.176 killing process with pid 487230 00:04:21.176 17:28:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 487230 00:04:21.176 17:28:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 487230 00:04:21.435 00:04:21.435 real 0m1.390s 00:04:21.435 user 0m1.364s 00:04:21.435 sys 0m0.604s 00:04:21.435 17:28:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:21.435 17:28:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.435 ************************************ 00:04:21.435 END TEST default_locks_via_rpc 00:04:21.435 ************************************ 00:04:21.692 17:28:59 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:21.692 17:28:59 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:21.692 17:28:59 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:21.692 17:28:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:21.692 ************************************ 00:04:21.692 START TEST non_locking_app_on_locked_coremask 00:04:21.692 ************************************ 00:04:21.692 17:28:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:04:21.692 17:28:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=487436 00:04:21.692 17:28:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 487436 /var/tmp/spdk.sock 00:04:21.692 17:28:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:21.692 17:28:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 487436 ']' 00:04:21.692 17:28:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:21.693 17:28:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:21.693 17:28:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:21.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:21.693 17:28:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:21.693 17:28:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:21.693 [2024-10-17 17:28:59.927998] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
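waitforlisten, traced at every spdk_tgt launch in this log, polls until the target answers on its RPC socket. A minimal stand-in with the same shape; the real helper in autotest_common.sh adds retry accounting and diagnostics, and rpc_get_methods here is assumed as a cheap standard SPDK RPC to probe with:

    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" || return 1     # target died during startup
            if [ -S "$sock" ] && scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }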
00:04:21.693 [2024-10-17 17:28:59.928060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid487436 ] 00:04:21.693 [2024-10-17 17:29:00.001429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.693 [2024-10-17 17:29:00.052165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.950 17:29:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:21.950 17:29:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:21.950 17:29:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=487454 00:04:21.950 17:29:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 487454 /var/tmp/spdk2.sock 00:04:21.950 17:29:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:21.950 17:29:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 487454 ']' 00:04:21.950 17:29:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:21.950 17:29:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:21.950 17:29:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:21.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:21.950 17:29:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:21.950 17:29:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:21.950 [2024-10-17 17:29:00.337501] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:04:21.950 [2024-10-17 17:29:00.337564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid487454 ] 00:04:22.208 [2024-10-17 17:29:00.440518] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
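That "CPU core locks deactivated" notice is the point of non_locking_app_on_locked_coremask: the first target owns the core-0 lock, and the second boots anyway because --disable-cpumask-locks skips claiming it. Schematically, with paths as in the trace:

    build/bin/spdk_tgt -m 0x1 &                          # claims the core-0 lock file
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
        -r /var/tmp/spdk2.sock &                         # same mask, no lock taken: both run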
00:04:22.208 [2024-10-17 17:29:00.440555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.208 [2024-10-17 17:29:00.537678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.141 17:29:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:23.141 17:29:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:23.141 17:29:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 487436 00:04:23.141 17:29:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 487436 00:04:23.141 17:29:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:23.707 lslocks: write error 00:04:23.707 17:29:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 487436 00:04:23.707 17:29:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 487436 ']' 00:04:23.707 17:29:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 487436 00:04:23.707 17:29:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:23.707 17:29:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:23.707 17:29:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 487436 00:04:23.707 17:29:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:23.707 17:29:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:23.707 17:29:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 487436' 00:04:23.707 killing process with pid 487436 00:04:23.707 17:29:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 487436 00:04:23.707 17:29:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 487436 00:04:24.273 17:29:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 487454 00:04:24.273 17:29:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 487454 ']' 00:04:24.273 17:29:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 487454 00:04:24.273 17:29:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:24.273 17:29:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:24.273 17:29:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 487454 00:04:24.273 17:29:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:24.273 17:29:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:24.273 17:29:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 487454' 00:04:24.273 killing 
process with pid 487454 00:04:24.273 17:29:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 487454 00:04:24.273 17:29:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 487454 00:04:24.838 00:04:24.838 real 0m3.064s 00:04:24.838 user 0m3.210s 00:04:24.838 sys 0m1.084s 00:04:24.838 17:29:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:24.838 17:29:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:24.838 ************************************ 00:04:24.838 END TEST non_locking_app_on_locked_coremask 00:04:24.838 ************************************ 00:04:24.838 17:29:02 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:24.838 17:29:02 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:24.838 17:29:02 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:24.838 17:29:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:24.838 ************************************ 00:04:24.838 START TEST locking_app_on_unlocked_coremask 00:04:24.838 ************************************ 00:04:24.838 17:29:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:04:24.838 17:29:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=487850 00:04:24.838 17:29:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 487850 /var/tmp/spdk.sock 00:04:24.838 17:29:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:24.838 17:29:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 487850 ']' 00:04:24.838 17:29:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.838 17:29:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:24.839 17:29:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.839 17:29:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:24.839 17:29:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:24.839 [2024-10-17 17:29:03.073989] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:04:24.839 [2024-10-17 17:29:03.074054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid487850 ] 00:04:24.839 [2024-10-17 17:29:03.148588] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
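Each per-core lock is an advisory file lock, which is why an outside tool like lslocks can observe it. The effect is reproducible from a plain shell; the file name below follows the /var/tmp/spdk_cpu_lock_000 naming visible in this log, and flock(1) stands in for the locking done inside app.c (an assumption about the exact mechanism):

    exec 9> /var/tmp/spdk_cpu_lock_000       # open fd 9 on the core-0 lock file
    if flock -n 9; then
        echo "core 0 lock acquired"
    else
        echo "core 0 is already claimed"     # what the doomed second instances run into
    fi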
00:04:24.839 [2024-10-17 17:29:03.148622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.839 [2024-10-17 17:29:03.194749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.096 17:29:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:25.096 17:29:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:25.096 17:29:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=488021 00:04:25.096 17:29:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 488021 /var/tmp/spdk2.sock 00:04:25.096 17:29:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:25.096 17:29:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 488021 ']' 00:04:25.096 17:29:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:25.096 17:29:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:25.096 17:29:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:25.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:25.096 17:29:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:25.096 17:29:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:25.354 [2024-10-17 17:29:03.495262] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
00:04:25.354 [2024-10-17 17:29:03.495326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488021 ] 00:04:25.354 [2024-10-17 17:29:03.599584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.354 [2024-10-17 17:29:03.691276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.287 17:29:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:26.287 17:29:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:26.287 17:29:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 488021 00:04:26.287 17:29:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:26.287 17:29:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 488021 00:04:26.546 lslocks: write error 00:04:26.546 17:29:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 487850 00:04:26.546 17:29:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 487850 ']' 00:04:26.546 17:29:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 487850 00:04:26.546 17:29:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:26.546 17:29:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:26.546 17:29:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 487850 00:04:26.546 17:29:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:26.546 17:29:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:26.546 17:29:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 487850' 00:04:26.546 killing process with pid 487850 00:04:26.546 17:29:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 487850 00:04:26.546 17:29:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 487850 00:04:27.480 17:29:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 488021 00:04:27.480 17:29:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 488021 ']' 00:04:27.480 17:29:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 488021 00:04:27.480 17:29:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:27.480 17:29:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:27.480 17:29:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 488021 00:04:27.480 17:29:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:27.480 17:29:05 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:27.480 17:29:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 488021' 00:04:27.480 killing process with pid 488021 00:04:27.480 17:29:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 488021 00:04:27.480 17:29:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 488021 00:04:27.738 00:04:27.738 real 0m2.921s 00:04:27.738 user 0m3.056s 00:04:27.738 sys 0m1.039s 00:04:27.738 17:29:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:27.738 17:29:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:27.738 ************************************ 00:04:27.739 END TEST locking_app_on_unlocked_coremask 00:04:27.739 ************************************ 00:04:27.739 17:29:05 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:27.739 17:29:05 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:27.739 17:29:05 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:27.739 17:29:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:27.739 ************************************ 00:04:27.739 START TEST locking_app_on_locked_coremask 00:04:27.739 ************************************ 00:04:27.739 17:29:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:04:27.739 17:29:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=488353 00:04:27.739 17:29:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 488353 /var/tmp/spdk.sock 00:04:27.739 17:29:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:27.739 17:29:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 488353 ']' 00:04:27.739 17:29:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.739 17:29:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:27.739 17:29:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.739 17:29:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:27.739 17:29:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:27.739 [2024-10-17 17:29:06.077912] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
00:04:27.739 [2024-10-17 17:29:06.077970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488353 ] 00:04:27.997 [2024-10-17 17:29:06.148600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.997 [2024-10-17 17:29:06.191292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.255 17:29:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:28.255 17:29:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:28.255 17:29:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:28.255 17:29:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=488428 00:04:28.255 17:29:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 488428 /var/tmp/spdk2.sock 00:04:28.255 17:29:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:28.255 17:29:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 488428 /var/tmp/spdk2.sock 00:04:28.255 17:29:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:28.255 17:29:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:28.255 17:29:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:28.255 17:29:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:28.255 17:29:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 488428 /var/tmp/spdk2.sock 00:04:28.255 17:29:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 488428 ']' 00:04:28.255 17:29:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:28.255 17:29:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:28.255 17:29:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:28.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:28.255 17:29:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:28.255 17:29:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:28.255 [2024-10-17 17:29:06.451830] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
00:04:28.255 [2024-10-17 17:29:06.451887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488428 ] 00:04:28.255 [2024-10-17 17:29:06.546833] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 488353 has claimed it. 00:04:28.255 [2024-10-17 17:29:06.546881] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:28.821 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (488428) - No such process 00:04:28.821 ERROR: process (pid: 488428) is no longer running 00:04:28.821 17:29:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:28.821 17:29:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:04:28.821 17:29:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:28.821 17:29:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:28.821 17:29:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:28.821 17:29:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:28.822 17:29:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 488353 00:04:28.822 17:29:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:28.822 17:29:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 488353 00:04:29.387 lslocks: write error 00:04:29.387 17:29:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 488353 00:04:29.387 17:29:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 488353 ']' 00:04:29.387 17:29:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 488353 00:04:29.387 17:29:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:04:29.387 17:29:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:29.387 17:29:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 488353 00:04:29.387 17:29:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:29.387 17:29:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:29.387 17:29:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 488353' 00:04:29.387 killing process with pid 488353 00:04:29.387 17:29:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 488353 00:04:29.387 17:29:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 488353 00:04:29.954 00:04:29.954 real 0m2.026s 00:04:29.954 user 0m2.118s 00:04:29.954 sys 0m0.739s 00:04:29.954 17:29:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:29.954 
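The "No such process" failure above is the assertion itself: NOT inverts the status of waitforlisten, so the test passes precisely because the second target exited. A reduced version of that wrapper, following the es= bookkeeping in the trace (the real NOT also validates its argument before running it):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"   # killed by a signal: propagate, do not invert
        (( !es == 0 ))                   # succeed only when the wrapped command failed
    }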
17:29:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:29.954 ************************************ 00:04:29.954 END TEST locking_app_on_locked_coremask 00:04:29.954 ************************************ 00:04:29.954 17:29:08 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:29.954 17:29:08 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:29.954 17:29:08 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:29.954 17:29:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:29.954 ************************************ 00:04:29.954 START TEST locking_overlapped_coremask 00:04:29.954 ************************************ 00:04:29.954 17:29:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:04:29.954 17:29:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=488639 00:04:29.954 17:29:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 488639 /var/tmp/spdk.sock 00:04:29.954 17:29:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:29.954 17:29:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 488639 ']' 00:04:29.954 17:29:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.954 17:29:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:29.954 17:29:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.954 17:29:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:29.954 17:29:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:29.954 [2024-10-17 17:29:08.185498] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
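The masks here are plain hex bit fields, bit N standing for core N: -m 0x7 covers cores 0 to 2, and the -m 0x1c used for the second instance below covers cores 2 to 4, so the two collide exactly on core 2. The overlap can be computed directly:

    mask1=0x7 mask2=0x1c
    printf 'cores claimed by both:'
    for ((core = 0; core < 8; core++)); do
        if (( (mask1 >> core & 1) && (mask2 >> core & 1) )); then
            printf ' %d' "$core"
        fi
    done
    echo    # -> cores claimed by both: 2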
00:04:29.954 [2024-10-17 17:29:08.185556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488639 ] 00:04:29.954 [2024-10-17 17:29:08.256133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:29.954 [2024-10-17 17:29:08.305030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.954 [2024-10-17 17:29:08.305120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:29.954 [2024-10-17 17:29:08.305122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.212 17:29:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:30.212 17:29:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:04:30.212 17:29:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=488661 00:04:30.212 17:29:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:30.212 17:29:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 488661 /var/tmp/spdk2.sock 00:04:30.212 17:29:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:04:30.212 17:29:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 488661 /var/tmp/spdk2.sock 00:04:30.212 17:29:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:04:30.212 17:29:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:30.212 17:29:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:04:30.212 17:29:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:30.212 17:29:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 488661 /var/tmp/spdk2.sock 00:04:30.212 17:29:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 488661 ']' 00:04:30.212 17:29:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:30.212 17:29:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:30.212 17:29:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:30.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:30.212 17:29:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:30.212 17:29:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:30.212 [2024-10-17 17:29:08.585940] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
00:04:30.212 [2024-10-17 17:29:08.585997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488661 ] 00:04:30.470 [2024-10-17 17:29:08.688327] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 488639 has claimed it. 00:04:30.470 [2024-10-17 17:29:08.688374] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:31.037 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (488661) - No such process 00:04:31.037 ERROR: process (pid: 488661) is no longer running 00:04:31.037 17:29:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:31.037 17:29:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:04:31.037 17:29:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:04:31.037 17:29:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:31.037 17:29:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:31.037 17:29:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:31.037 17:29:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:31.037 17:29:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:31.037 17:29:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:31.037 17:29:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:31.037 17:29:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 488639 00:04:31.037 17:29:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 488639 ']' 00:04:31.037 17:29:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 488639 00:04:31.037 17:29:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:04:31.037 17:29:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:31.037 17:29:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 488639 00:04:31.037 17:29:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:31.037 17:29:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:31.037 17:29:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 488639' 00:04:31.037 killing process with pid 488639 00:04:31.037 17:29:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 488639 00:04:31.037 17:29:09 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 488639 00:04:31.296 00:04:31.296 real 0m1.540s 00:04:31.296 user 0m4.217s 00:04:31.296 sys 0m0.479s 00:04:31.296 17:29:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:31.296 17:29:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:31.296 ************************************ 00:04:31.296 END TEST locking_overlapped_coremask 00:04:31.296 ************************************ 00:04:31.554 17:29:09 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:31.554 17:29:09 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:31.554 17:29:09 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:31.554 17:29:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:31.554 ************************************ 00:04:31.554 START TEST locking_overlapped_coremask_via_rpc 00:04:31.554 ************************************ 00:04:31.554 17:29:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:04:31.554 17:29:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=488865 00:04:31.554 17:29:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 488865 /var/tmp/spdk.sock 00:04:31.554 17:29:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:31.554 17:29:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 488865 ']' 00:04:31.554 17:29:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.554 17:29:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:31.554 17:29:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.554 17:29:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:31.554 17:29:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.554 [2024-10-17 17:29:09.813881] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:04:31.554 [2024-10-17 17:29:09.813940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488865 ] 00:04:31.554 [2024-10-17 17:29:09.886731] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
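The "Cannot create lock on core 2" failure above comes down to plain mask arithmetic: the first target ran with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), and the two masks intersect on core 2. A one-line check, runnable in any shell:

    # 0x7 = 0b00111 (cores 0-2), 0x1c = 0b11100 (cores 2-4)
    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. bit 2 = core 2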
00:04:31.554 [2024-10-17 17:29:09.886764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:31.554 [2024-10-17 17:29:09.934953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.554 [2024-10-17 17:29:09.935044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:31.554 [2024-10-17 17:29:09.935046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.812 17:29:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:31.812 17:29:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:31.812 17:29:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=489001 00:04:31.812 17:29:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 489001 /var/tmp/spdk2.sock 00:04:31.812 17:29:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:31.812 17:29:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 489001 ']' 00:04:31.812 17:29:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:31.812 17:29:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:31.812 17:29:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:31.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:31.812 17:29:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:31.812 17:29:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.070 [2024-10-17 17:29:10.226646] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:04:32.070 [2024-10-17 17:29:10.226715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid489001 ] 00:04:32.070 [2024-10-17 17:29:10.330615] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:32.070 [2024-10-17 17:29:10.330646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:32.070 [2024-10-17 17:29:10.419805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:32.070 [2024-10-17 17:29:10.419913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:32.070 [2024-10-17 17:29:10.419915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:33.004 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:33.004 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:33.004 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:33.004 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.004 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.004 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:33.004 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:33.004 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:33.005 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:33.005 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:33.005 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:33.005 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:33.005 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:33.005 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:33.005 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:33.005 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.005 [2024-10-17 17:29:11.090512] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 488865 has claimed it. 
00:04:33.005 request: 00:04:33.005 { 00:04:33.005 "method": "framework_enable_cpumask_locks", 00:04:33.005 "req_id": 1 00:04:33.005 } 00:04:33.005 Got JSON-RPC error response 00:04:33.005 response: 00:04:33.005 { 00:04:33.005 "code": -32603, 00:04:33.005 "message": "Failed to claim CPU core: 2" 00:04:33.005 } 00:04:33.005 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:33.005 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:33.005 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:33.005 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:33.005 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:33.005 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 488865 /var/tmp/spdk.sock 00:04:33.005 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 488865 ']' 00:04:33.005 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.005 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:33.005 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.005 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:33.005 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.005 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:33.005 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:33.005 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 489001 /var/tmp/spdk2.sock 00:04:33.005 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 489001 ']' 00:04:33.005 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:33.005 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:33.005 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:33.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
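The via_rpc variant reproduces the same collision without start-up locking: both targets come up with --disable-cpumask-locks, the first then enables locks over RPC and claims cores 0-2, and the second's attempt dies on core 2 with the JSON-RPC error shown above. A condensed sketch of the two calls, using the rpc.py and socket paths from the log (the real test goes through the rpc_cmd/NOT wrappers):

    # first target: enabling locks claims its whole mask (0x7)
    scripts/rpc.py framework_enable_cpumask_locks
    # second target (mask 0x1c) on its own socket: core 2 is already held, so
    # this returns {"code": -32603, "message": "Failed to claim CPU core: 2"}
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks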
00:04:33.005 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:33.005 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.263 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:33.263 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:33.263 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:33.263 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:33.263 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:33.263 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:33.263 00:04:33.263 real 0m1.758s 00:04:33.263 user 0m0.822s 00:04:33.263 sys 0m0.180s 00:04:33.263 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:33.263 17:29:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.263 ************************************ 00:04:33.263 END TEST locking_overlapped_coremask_via_rpc 00:04:33.263 ************************************ 00:04:33.263 17:29:11 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:33.263 17:29:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 488865 ]] 00:04:33.263 17:29:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 488865 00:04:33.263 17:29:11 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 488865 ']' 00:04:33.263 17:29:11 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 488865 00:04:33.263 17:29:11 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:04:33.263 17:29:11 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:33.263 17:29:11 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 488865 00:04:33.263 17:29:11 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:33.263 17:29:11 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:33.263 17:29:11 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 488865' 00:04:33.263 killing process with pid 488865 00:04:33.263 17:29:11 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 488865 00:04:33.263 17:29:11 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 488865 00:04:33.829 17:29:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 489001 ]] 00:04:33.829 17:29:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 489001 00:04:33.829 17:29:11 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 489001 ']' 00:04:33.829 17:29:11 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 489001 00:04:33.829 17:29:11 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:04:33.829 17:29:11 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
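The check_remaining_locks step above is a straight glob-versus-brace-expansion comparison: the lock files actually present under /var/tmp must be exactly the set a 0x7 (three-core) claim should leave behind. Reassembled from the fragments in the log:

    check_remaining_locks() {
        # what exists on disk...
        locks=(/var/tmp/spdk_cpu_lock_*)
        # ...versus what a claim on cores 0-2 should have created
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        [[ "${locks[*]}" == "${locks_expected[*]}" ]]
    }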
00:04:33.829 17:29:11 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 489001 00:04:33.829 17:29:12 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:04:33.829 17:29:12 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:04:33.829 17:29:12 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 489001' 00:04:33.829 killing process with pid 489001 00:04:33.829 17:29:12 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 489001 00:04:33.829 17:29:12 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 489001 00:04:34.088 17:29:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:34.088 17:29:12 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:34.088 17:29:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 488865 ]] 00:04:34.088 17:29:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 488865 00:04:34.088 17:29:12 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 488865 ']' 00:04:34.088 17:29:12 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 488865 00:04:34.088 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (488865) - No such process 00:04:34.088 17:29:12 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 488865 is not found' 00:04:34.088 Process with pid 488865 is not found 00:04:34.088 17:29:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 489001 ]] 00:04:34.088 17:29:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 489001 00:04:34.088 17:29:12 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 489001 ']' 00:04:34.088 17:29:12 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 489001 00:04:34.088 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (489001) - No such process 00:04:34.088 17:29:12 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 489001 is not found' 00:04:34.088 Process with pid 489001 is not found 00:04:34.088 17:29:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:34.088 00:04:34.088 real 0m15.440s 00:04:34.088 user 0m26.086s 00:04:34.088 sys 0m5.808s 00:04:34.088 17:29:12 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.088 17:29:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:34.088 ************************************ 00:04:34.088 END TEST cpu_locks 00:04:34.088 ************************************ 00:04:34.088 00:04:34.088 real 0m40.287s 00:04:34.088 user 1m14.874s 00:04:34.088 sys 0m9.997s 00:04:34.088 17:29:12 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:34.088 17:29:12 event -- common/autotest_common.sh@10 -- # set +x 00:04:34.088 ************************************ 00:04:34.088 END TEST event 00:04:34.088 ************************************ 00:04:34.088 17:29:12 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:04:34.088 17:29:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:34.088 17:29:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.088 17:29:12 -- common/autotest_common.sh@10 -- # set +x 00:04:34.346 ************************************ 00:04:34.346 START TEST thread 00:04:34.346 ************************************ 00:04:34.346 17:29:12 thread -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:04:34.346 * Looking for test storage... 00:04:34.346 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:04:34.346 17:29:12 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:34.346 17:29:12 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:04:34.346 17:29:12 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:34.346 17:29:12 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:34.346 17:29:12 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.346 17:29:12 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.346 17:29:12 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.346 17:29:12 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.346 17:29:12 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.346 17:29:12 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.346 17:29:12 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.346 17:29:12 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.346 17:29:12 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:34.346 17:29:12 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.346 17:29:12 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.346 17:29:12 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:34.346 17:29:12 thread -- scripts/common.sh@345 -- # : 1 00:04:34.346 17:29:12 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.346 17:29:12 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:34.346 17:29:12 thread -- scripts/common.sh@365 -- # decimal 1 00:04:34.346 17:29:12 thread -- scripts/common.sh@353 -- # local d=1 00:04:34.346 17:29:12 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.346 17:29:12 thread -- scripts/common.sh@355 -- # echo 1 00:04:34.346 17:29:12 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.346 17:29:12 thread -- scripts/common.sh@366 -- # decimal 2 00:04:34.346 17:29:12 thread -- scripts/common.sh@353 -- # local d=2 00:04:34.346 17:29:12 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.346 17:29:12 thread -- scripts/common.sh@355 -- # echo 2 00:04:34.346 17:29:12 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.346 17:29:12 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.346 17:29:12 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.346 17:29:12 thread -- scripts/common.sh@368 -- # return 0 00:04:34.346 17:29:12 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.346 17:29:12 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:34.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.346 --rc genhtml_branch_coverage=1 00:04:34.346 --rc genhtml_function_coverage=1 00:04:34.346 --rc genhtml_legend=1 00:04:34.346 --rc geninfo_all_blocks=1 00:04:34.346 --rc geninfo_unexecuted_blocks=1 00:04:34.346 00:04:34.346 ' 00:04:34.346 17:29:12 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:34.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.346 --rc genhtml_branch_coverage=1 00:04:34.346 --rc genhtml_function_coverage=1 00:04:34.346 --rc genhtml_legend=1 00:04:34.346 --rc geninfo_all_blocks=1 00:04:34.346 --rc geninfo_unexecuted_blocks=1 00:04:34.346 00:04:34.346 ' 00:04:34.346 17:29:12 thread -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:34.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.347 --rc genhtml_branch_coverage=1 00:04:34.347 --rc genhtml_function_coverage=1 00:04:34.347 --rc genhtml_legend=1 00:04:34.347 --rc geninfo_all_blocks=1 00:04:34.347 --rc geninfo_unexecuted_blocks=1 00:04:34.347 00:04:34.347 ' 00:04:34.347 17:29:12 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:34.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.347 --rc genhtml_branch_coverage=1 00:04:34.347 --rc genhtml_function_coverage=1 00:04:34.347 --rc genhtml_legend=1 00:04:34.347 --rc geninfo_all_blocks=1 00:04:34.347 --rc geninfo_unexecuted_blocks=1 00:04:34.347 00:04:34.347 ' 00:04:34.347 17:29:12 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:34.347 17:29:12 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:04:34.347 17:29:12 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:34.347 17:29:12 thread -- common/autotest_common.sh@10 -- # set +x 00:04:34.347 ************************************ 00:04:34.347 START TEST thread_poller_perf 00:04:34.347 ************************************ 00:04:34.347 17:29:12 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:34.604 [2024-10-17 17:29:12.747964] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:04:34.604 [2024-10-17 17:29:12.748031] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid489391 ] 00:04:34.604 [2024-10-17 17:29:12.824163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.604 [2024-10-17 17:29:12.869093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.604 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:04:35.538 [2024-10-17T15:29:13.929Z] ====================================== 00:04:35.538 [2024-10-17T15:29:13.929Z] busy:2306949328 (cyc) 00:04:35.538 [2024-10-17T15:29:13.929Z] total_run_count: 424000 00:04:35.538 [2024-10-17T15:29:13.929Z] tsc_hz: 2300000000 (cyc) 00:04:35.538 [2024-10-17T15:29:13.929Z] ====================================== 00:04:35.538 [2024-10-17T15:29:13.929Z] poller_cost: 5440 (cyc), 2365 (nsec) 00:04:35.538 00:04:35.538 real 0m1.192s 00:04:35.538 user 0m1.106s 00:04:35.538 sys 0m0.082s 00:04:35.538 17:29:13 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.538 17:29:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:35.538 ************************************ 00:04:35.538 END TEST thread_poller_perf 00:04:35.538 ************************************ 00:04:35.796 17:29:13 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:35.797 17:29:13 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:04:35.797 17:29:13 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.797 17:29:13 thread -- common/autotest_common.sh@10 -- # set +x 00:04:35.797 ************************************ 00:04:35.797 START TEST thread_poller_perf 00:04:35.797 ************************************ 00:04:35.797 17:29:13 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:35.797 [2024-10-17 17:29:14.014766] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:04:35.797 [2024-10-17 17:29:14.014832] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid489548 ] 00:04:35.797 [2024-10-17 17:29:14.088978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.797 [2024-10-17 17:29:14.133495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.797 Running 1000 pollers for 1 seconds with 0 microseconds period. 
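The poller_cost figure in the summary above is just busy cycles divided by run count, converted to wall time with the reported TSC rate; the zero-period run that follows works out the same way (416 cyc, 180 nsec). Redoing the arithmetic for the 1-microsecond run, with integer division as the tool appears to use:

    busy=2306949328 runs=424000 tsc_hz=2300000000
    echo $(( busy / runs ))                        # 5440 cycles per poll
    echo $(( busy / runs * 1000000000 / tsc_hz ))  # 2365 nsec at 2.3 GHz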
00:04:37.168 [2024-10-17T15:29:15.559Z] ====================================== 00:04:37.168 [2024-10-17T15:29:15.559Z] busy:2301561118 (cyc) 00:04:37.168 [2024-10-17T15:29:15.559Z] total_run_count: 5523000 00:04:37.168 [2024-10-17T15:29:15.559Z] tsc_hz: 2300000000 (cyc) 00:04:37.168 [2024-10-17T15:29:15.559Z] ====================================== 00:04:37.168 [2024-10-17T15:29:15.559Z] poller_cost: 416 (cyc), 180 (nsec) 00:04:37.168 00:04:37.168 real 0m1.185s 00:04:37.168 user 0m1.093s 00:04:37.168 sys 0m0.088s 00:04:37.168 17:29:15 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.168 17:29:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:37.168 ************************************ 00:04:37.168 END TEST thread_poller_perf 00:04:37.168 ************************************ 00:04:37.168 17:29:15 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:37.168 00:04:37.168 real 0m2.725s 00:04:37.168 user 0m2.358s 00:04:37.168 sys 0m0.386s 00:04:37.168 17:29:15 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.168 17:29:15 thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.168 ************************************ 00:04:37.168 END TEST thread 00:04:37.168 ************************************ 00:04:37.168 17:29:15 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:37.168 17:29:15 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:04:37.168 17:29:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.168 17:29:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.168 17:29:15 -- common/autotest_common.sh@10 -- # set +x 00:04:37.168 ************************************ 00:04:37.168 START TEST app_cmdline 00:04:37.168 ************************************ 00:04:37.168 17:29:15 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:04:37.168 * Looking for test storage... 
00:04:37.168 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:04:37.168 17:29:15 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:37.168 17:29:15 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:04:37.168 17:29:15 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:37.168 17:29:15 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:37.168 17:29:15 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.168 17:29:15 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.168 17:29:15 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.168 17:29:15 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.168 17:29:15 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.168 17:29:15 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.168 17:29:15 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.168 17:29:15 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.168 17:29:15 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.168 17:29:15 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.168 17:29:15 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.168 17:29:15 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:37.168 17:29:15 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:37.168 17:29:15 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.168 17:29:15 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:37.168 17:29:15 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:37.168 17:29:15 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:37.168 17:29:15 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.169 17:29:15 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:37.169 17:29:15 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.169 17:29:15 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:37.169 17:29:15 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:37.169 17:29:15 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.169 17:29:15 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:37.169 17:29:15 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.169 17:29:15 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.169 17:29:15 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.169 17:29:15 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:37.169 17:29:15 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.169 17:29:15 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:37.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.169 --rc genhtml_branch_coverage=1 00:04:37.169 --rc genhtml_function_coverage=1 00:04:37.169 --rc genhtml_legend=1 00:04:37.169 --rc geninfo_all_blocks=1 00:04:37.169 --rc geninfo_unexecuted_blocks=1 00:04:37.169 00:04:37.169 ' 00:04:37.169 17:29:15 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:37.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.169 --rc genhtml_branch_coverage=1 00:04:37.169 --rc genhtml_function_coverage=1 00:04:37.169 --rc genhtml_legend=1 00:04:37.169 --rc geninfo_all_blocks=1 00:04:37.169 --rc geninfo_unexecuted_blocks=1 
00:04:37.169 00:04:37.169 ' 00:04:37.169 17:29:15 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:37.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.169 --rc genhtml_branch_coverage=1 00:04:37.169 --rc genhtml_function_coverage=1 00:04:37.169 --rc genhtml_legend=1 00:04:37.169 --rc geninfo_all_blocks=1 00:04:37.169 --rc geninfo_unexecuted_blocks=1 00:04:37.169 00:04:37.169 ' 00:04:37.169 17:29:15 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:37.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.169 --rc genhtml_branch_coverage=1 00:04:37.169 --rc genhtml_function_coverage=1 00:04:37.169 --rc genhtml_legend=1 00:04:37.169 --rc geninfo_all_blocks=1 00:04:37.169 --rc geninfo_unexecuted_blocks=1 00:04:37.169 00:04:37.169 ' 00:04:37.169 17:29:15 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:37.169 17:29:15 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=489862 00:04:37.169 17:29:15 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:37.169 17:29:15 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 489862 00:04:37.169 17:29:15 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 489862 ']' 00:04:37.169 17:29:15 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.169 17:29:15 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:37.169 17:29:15 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.169 17:29:15 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:37.169 17:29:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:37.169 [2024-10-17 17:29:15.526203] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
00:04:37.169 [2024-10-17 17:29:15.526271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid489862 ] 00:04:37.426 [2024-10-17 17:29:15.599301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.426 [2024-10-17 17:29:15.645275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.683 17:29:15 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:37.683 17:29:15 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:04:37.683 17:29:15 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:04:37.683 { 00:04:37.683 "version": "SPDK v25.01-pre git sha1 264c0dc1a", 00:04:37.683 "fields": { 00:04:37.683 "major": 25, 00:04:37.683 "minor": 1, 00:04:37.683 "patch": 0, 00:04:37.683 "suffix": "-pre", 00:04:37.683 "commit": "264c0dc1a" 00:04:37.683 } 00:04:37.683 } 00:04:37.683 17:29:16 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:04:37.683 17:29:16 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:37.683 17:29:16 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:04:37.683 17:29:16 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:37.683 17:29:16 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:37.683 17:29:16 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.683 17:29:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:37.683 17:29:16 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:37.683 17:29:16 app_cmdline -- app/cmdline.sh@26 -- # sort 00:04:37.683 17:29:16 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.940 17:29:16 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:37.940 17:29:16 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:37.940 17:29:16 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:37.940 17:29:16 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:04:37.940 17:29:16 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:37.940 17:29:16 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:04:37.940 17:29:16 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:37.940 17:29:16 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:04:37.940 17:29:16 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:37.940 17:29:16 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:04:37.940 17:29:16 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:37.940 17:29:16 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:04:37.941 17:29:16 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:04:37.941 17:29:16 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:37.941 request: 00:04:37.941 { 00:04:37.941 "method": "env_dpdk_get_mem_stats", 00:04:37.941 "req_id": 1 00:04:37.941 } 00:04:37.941 Got JSON-RPC error response 00:04:37.941 response: 00:04:37.941 { 00:04:37.941 "code": -32601, 00:04:37.941 "message": "Method not found" 00:04:37.941 } 00:04:37.941 17:29:16 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:04:37.941 17:29:16 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:37.941 17:29:16 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:37.941 17:29:16 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:37.941 17:29:16 app_cmdline -- app/cmdline.sh@1 -- # killprocess 489862 00:04:37.941 17:29:16 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 489862 ']' 00:04:37.941 17:29:16 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 489862 00:04:37.941 17:29:16 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:04:37.941 17:29:16 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:37.941 17:29:16 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 489862 00:04:38.198 17:29:16 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:38.198 17:29:16 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:38.198 17:29:16 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 489862' 00:04:38.198 killing process with pid 489862 00:04:38.198 17:29:16 app_cmdline -- common/autotest_common.sh@969 -- # kill 489862 00:04:38.198 17:29:16 app_cmdline -- common/autotest_common.sh@974 -- # wait 489862 00:04:38.455 00:04:38.455 real 0m1.366s 00:04:38.455 user 0m1.543s 00:04:38.455 sys 0m0.495s 00:04:38.455 17:29:16 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.455 17:29:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:38.455 ************************************ 00:04:38.455 END TEST app_cmdline 00:04:38.455 ************************************ 00:04:38.455 17:29:16 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:04:38.455 17:29:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.455 17:29:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.455 17:29:16 -- common/autotest_common.sh@10 -- # set +x 00:04:38.455 ************************************ 00:04:38.455 START TEST version 00:04:38.455 ************************************ 00:04:38.455 17:29:16 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:04:38.455 * Looking for test storage... 
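The "Method not found" in the app_cmdline run above is the RPC allowlist doing its job: that spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two methods answer normally while env_dpdk_get_mem_stats is rejected with the same -32601 a nonexistent method would get. Condensed from the calls in the log (paths abbreviated):

    spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py rpc_get_methods         # allowed: lists exactly the two methods
    scripts/rpc.py spdk_get_version        # allowed: returns the version JSON above
    scripts/rpc.py env_dpdk_get_mem_stats  # rejected: -32601 "Method not found"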
00:04:38.714 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:04:38.714 17:29:16 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:38.714 17:29:16 version -- common/autotest_common.sh@1691 -- # lcov --version 00:04:38.714 17:29:16 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:38.714 17:29:16 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:38.714 17:29:16 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.714 17:29:16 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.714 17:29:16 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.714 17:29:16 version -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.714 17:29:16 version -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.714 17:29:16 version -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.714 17:29:16 version -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.714 17:29:16 version -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.714 17:29:16 version -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.714 17:29:16 version -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.714 17:29:16 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.714 17:29:16 version -- scripts/common.sh@344 -- # case "$op" in 00:04:38.714 17:29:16 version -- scripts/common.sh@345 -- # : 1 00:04:38.714 17:29:16 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.714 17:29:16 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:38.714 17:29:16 version -- scripts/common.sh@365 -- # decimal 1 00:04:38.714 17:29:16 version -- scripts/common.sh@353 -- # local d=1 00:04:38.714 17:29:16 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.714 17:29:16 version -- scripts/common.sh@355 -- # echo 1 00:04:38.714 17:29:16 version -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.714 17:29:16 version -- scripts/common.sh@366 -- # decimal 2 00:04:38.714 17:29:16 version -- scripts/common.sh@353 -- # local d=2 00:04:38.714 17:29:16 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.714 17:29:16 version -- scripts/common.sh@355 -- # echo 2 00:04:38.714 17:29:16 version -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.714 17:29:16 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.714 17:29:16 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.714 17:29:16 version -- scripts/common.sh@368 -- # return 0 00:04:38.714 17:29:16 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.714 17:29:16 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:38.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.714 --rc genhtml_branch_coverage=1 00:04:38.714 --rc genhtml_function_coverage=1 00:04:38.714 --rc genhtml_legend=1 00:04:38.714 --rc geninfo_all_blocks=1 00:04:38.714 --rc geninfo_unexecuted_blocks=1 00:04:38.714 00:04:38.714 ' 00:04:38.714 17:29:16 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:38.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.714 --rc genhtml_branch_coverage=1 00:04:38.714 --rc genhtml_function_coverage=1 00:04:38.714 --rc genhtml_legend=1 00:04:38.714 --rc geninfo_all_blocks=1 00:04:38.714 --rc geninfo_unexecuted_blocks=1 00:04:38.714 00:04:38.714 ' 00:04:38.714 17:29:16 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:38.714 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.714 --rc genhtml_branch_coverage=1 00:04:38.714 --rc genhtml_function_coverage=1 00:04:38.714 --rc genhtml_legend=1 00:04:38.714 --rc geninfo_all_blocks=1 00:04:38.714 --rc geninfo_unexecuted_blocks=1 00:04:38.714 00:04:38.714 ' 00:04:38.714 17:29:16 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:38.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.714 --rc genhtml_branch_coverage=1 00:04:38.714 --rc genhtml_function_coverage=1 00:04:38.714 --rc genhtml_legend=1 00:04:38.714 --rc geninfo_all_blocks=1 00:04:38.714 --rc geninfo_unexecuted_blocks=1 00:04:38.714 00:04:38.714 ' 00:04:38.714 17:29:16 version -- app/version.sh@17 -- # get_header_version major 00:04:38.714 17:29:16 version -- app/version.sh@14 -- # tr -d '"' 00:04:38.714 17:29:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:04:38.714 17:29:16 version -- app/version.sh@14 -- # cut -f2 00:04:38.714 17:29:16 version -- app/version.sh@17 -- # major=25 00:04:38.714 17:29:16 version -- app/version.sh@18 -- # get_header_version minor 00:04:38.714 17:29:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:04:38.714 17:29:16 version -- app/version.sh@14 -- # cut -f2 00:04:38.714 17:29:16 version -- app/version.sh@14 -- # tr -d '"' 00:04:38.714 17:29:16 version -- app/version.sh@18 -- # minor=1 00:04:38.714 17:29:16 version -- app/version.sh@19 -- # get_header_version patch 00:04:38.714 17:29:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:04:38.714 17:29:16 version -- app/version.sh@14 -- # cut -f2 00:04:38.714 17:29:16 version -- app/version.sh@14 -- # tr -d '"' 00:04:38.714 17:29:16 version -- app/version.sh@19 -- # patch=0 00:04:38.714 17:29:16 version -- app/version.sh@20 -- # get_header_version suffix 00:04:38.714 17:29:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:04:38.714 17:29:16 version -- app/version.sh@14 -- # cut -f2 00:04:38.714 17:29:16 version -- app/version.sh@14 -- # tr -d '"' 00:04:38.714 17:29:16 version -- app/version.sh@20 -- # suffix=-pre 00:04:38.714 17:29:16 version -- app/version.sh@22 -- # version=25.1 00:04:38.714 17:29:16 version -- app/version.sh@25 -- # (( patch != 0 )) 00:04:38.714 17:29:16 version -- app/version.sh@28 -- # version=25.1rc0 00:04:38.714 17:29:16 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:04:38.714 17:29:16 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:38.714 17:29:17 version -- app/version.sh@30 -- # py_version=25.1rc0 00:04:38.714 17:29:17 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:04:38.714 00:04:38.714 real 0m0.279s 00:04:38.714 user 0m0.150s 00:04:38.714 sys 0m0.182s 00:04:38.714 17:29:17 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.714 17:29:17 version -- 
common/autotest_common.sh@10 -- # set +x 00:04:38.714 ************************************ 00:04:38.714 END TEST version 00:04:38.714 ************************************ 00:04:38.714 17:29:17 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:04:38.714 17:29:17 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:04:38.714 17:29:17 -- spdk/autotest.sh@194 -- # uname -s 00:04:38.714 17:29:17 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:04:38.714 17:29:17 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:38.714 17:29:17 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:38.714 17:29:17 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:04:38.714 17:29:17 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:04:38.714 17:29:17 -- spdk/autotest.sh@256 -- # timing_exit lib 00:04:38.714 17:29:17 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:38.714 17:29:17 -- common/autotest_common.sh@10 -- # set +x 00:04:38.973 17:29:17 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:04:38.973 17:29:17 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:04:38.973 17:29:17 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:04:38.973 17:29:17 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:04:38.973 17:29:17 -- spdk/autotest.sh@276 -- # '[' rdma = rdma ']' 00:04:38.973 17:29:17 -- spdk/autotest.sh@277 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:04:38.973 17:29:17 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:04:38.973 17:29:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.973 17:29:17 -- common/autotest_common.sh@10 -- # set +x 00:04:38.973 ************************************ 00:04:38.973 START TEST nvmf_rdma 00:04:38.973 ************************************ 00:04:38.973 17:29:17 nvmf_rdma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:04:38.973 * Looking for test storage... 00:04:38.973 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:04:38.973 17:29:17 nvmf_rdma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:38.973 17:29:17 nvmf_rdma -- common/autotest_common.sh@1691 -- # lcov --version 00:04:38.973 17:29:17 nvmf_rdma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:38.973 17:29:17 nvmf_rdma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:38.973 17:29:17 nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.973 17:29:17 nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.973 17:29:17 nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.973 17:29:17 nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.973 17:29:17 nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.973 17:29:17 nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.973 17:29:17 nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.973 17:29:17 nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.973 17:29:17 nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.973 17:29:17 nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.973 17:29:17 nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.973 17:29:17 nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:04:38.973 17:29:17 nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:04:38.973 17:29:17 nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.973 17:29:17 nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:38.973 17:29:17 nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:04:38.973 17:29:17 nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:04:38.973 17:29:17 nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.973 17:29:17 nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:04:38.973 17:29:17 nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.973 17:29:17 nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:04:38.973 17:29:17 nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:04:38.973 17:29:17 nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.973 17:29:17 nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:04:38.973 17:29:17 nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.973 17:29:17 nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.973 17:29:17 nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.973 17:29:17 nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:04:38.973 17:29:17 nvmf_rdma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.973 17:29:17 nvmf_rdma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:38.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.973 --rc genhtml_branch_coverage=1 00:04:38.973 --rc genhtml_function_coverage=1 00:04:38.973 --rc genhtml_legend=1 00:04:38.973 --rc geninfo_all_blocks=1 00:04:38.973 --rc geninfo_unexecuted_blocks=1 00:04:38.973 00:04:38.973 ' 00:04:38.973 17:29:17 nvmf_rdma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:38.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.973 --rc genhtml_branch_coverage=1 00:04:38.973 --rc genhtml_function_coverage=1 00:04:38.973 --rc genhtml_legend=1 00:04:38.973 --rc geninfo_all_blocks=1 00:04:38.973 --rc geninfo_unexecuted_blocks=1 00:04:38.973 00:04:38.973 ' 00:04:38.973 17:29:17 nvmf_rdma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:38.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.973 --rc genhtml_branch_coverage=1 00:04:38.973 --rc genhtml_function_coverage=1 00:04:38.973 --rc genhtml_legend=1 00:04:38.973 --rc geninfo_all_blocks=1 00:04:38.973 --rc geninfo_unexecuted_blocks=1 00:04:38.973 00:04:38.973 ' 00:04:38.973 17:29:17 nvmf_rdma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:38.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.973 --rc genhtml_branch_coverage=1 00:04:38.973 --rc genhtml_function_coverage=1 00:04:38.973 --rc genhtml_legend=1 00:04:38.973 --rc geninfo_all_blocks=1 00:04:38.973 --rc geninfo_unexecuted_blocks=1 00:04:38.973 00:04:38.973 ' 00:04:38.973 17:29:17 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:04:38.973 17:29:17 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:04:38.973 17:29:17 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:04:38.973 17:29:17 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:04:38.973 17:29:17 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.973 17:29:17 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:04:39.231 ************************************ 00:04:39.231 START TEST nvmf_target_core 00:04:39.231 ************************************ 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:04:39.231 * Looking for test storage... 00:04:39.231 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:04:39.231 17:29:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:39.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.232 --rc genhtml_branch_coverage=1 00:04:39.232 --rc genhtml_function_coverage=1 00:04:39.232 --rc genhtml_legend=1 00:04:39.232 --rc geninfo_all_blocks=1 00:04:39.232 --rc geninfo_unexecuted_blocks=1 00:04:39.232 00:04:39.232 ' 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:39.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.232 --rc genhtml_branch_coverage=1 00:04:39.232 --rc genhtml_function_coverage=1 00:04:39.232 --rc genhtml_legend=1 00:04:39.232 --rc geninfo_all_blocks=1 00:04:39.232 --rc geninfo_unexecuted_blocks=1 00:04:39.232 00:04:39.232 ' 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:39.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.232 --rc genhtml_branch_coverage=1 00:04:39.232 --rc genhtml_function_coverage=1 00:04:39.232 --rc genhtml_legend=1 00:04:39.232 --rc geninfo_all_blocks=1 00:04:39.232 --rc geninfo_unexecuted_blocks=1 00:04:39.232 00:04:39.232 ' 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:39.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.232 --rc genhtml_branch_coverage=1 00:04:39.232 --rc genhtml_function_coverage=1 00:04:39.232 --rc genhtml_legend=1 00:04:39.232 --rc geninfo_all_blocks=1 00:04:39.232 --rc geninfo_unexecuted_blocks=1 00:04:39.232 00:04:39.232 ' 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:39.232 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.232 17:29:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:39.492 
************************************ 00:04:39.492 START TEST nvmf_abort 00:04:39.492 ************************************ 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:04:39.492 * Looking for test storage... 00:04:39.492 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:39.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.492 --rc genhtml_branch_coverage=1 00:04:39.492 --rc genhtml_function_coverage=1 00:04:39.492 --rc genhtml_legend=1 00:04:39.492 --rc geninfo_all_blocks=1 00:04:39.492 --rc geninfo_unexecuted_blocks=1 00:04:39.492 00:04:39.492 ' 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:39.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.492 --rc genhtml_branch_coverage=1 00:04:39.492 --rc genhtml_function_coverage=1 00:04:39.492 --rc genhtml_legend=1 00:04:39.492 --rc geninfo_all_blocks=1 00:04:39.492 --rc geninfo_unexecuted_blocks=1 00:04:39.492 00:04:39.492 ' 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:39.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.492 --rc genhtml_branch_coverage=1 00:04:39.492 --rc genhtml_function_coverage=1 00:04:39.492 --rc genhtml_legend=1 00:04:39.492 --rc geninfo_all_blocks=1 00:04:39.492 --rc geninfo_unexecuted_blocks=1 00:04:39.492 00:04:39.492 ' 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:39.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.492 --rc genhtml_branch_coverage=1 00:04:39.492 --rc genhtml_function_coverage=1 00:04:39.492 --rc genhtml_legend=1 00:04:39.492 --rc geninfo_all_blocks=1 00:04:39.492 --rc geninfo_unexecuted_blocks=1 00:04:39.492 00:04:39.492 ' 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:39.492 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:39.493 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:39.493 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.493 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.493 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.493 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:04:39.493 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.493 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:04:39.493 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:39.493 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:39.493 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:39.493 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:39.493 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:39.493 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:39.493 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:39.493 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:39.493 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:39.493 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:39.493 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:39.493 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:39.493 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # 
nvmftestinit 00:04:39.493 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:04:39.493 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:39.493 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:04:39.493 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:04:39.493 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:04:39.493 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:39.493 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:39.493 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:39.493 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:04:39.493 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:04:39.493 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:04:39.493 17:29:17 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:04:46.056 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:04:46.056 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ rdma == 
rdma ]] 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:04:46.056 Found net devices under 0000:18:00.0: mlx_0_0 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:04:46.056 Found net devices under 0000:18:00.1: mlx_0_1 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # rdma_device_init 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # uname 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe ib_cm 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe ib_core 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe ib_umad 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@70 -- # modprobe iw_cm 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@528 -- # allocate_nic_ips 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # get_rdma_if_list 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:04:46.056 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:04:46.057 2: mlx_0_0: mtu 1500 
qdisc mq state DOWN group default qlen 1000 00:04:46.057 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:04:46.057 altname enp24s0f0np0 00:04:46.057 altname ens785f0np0 00:04:46.057 inet 192.168.100.8/24 scope global mlx_0_0 00:04:46.057 valid_lft forever preferred_lft forever 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:04:46.057 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:04:46.057 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:04:46.057 altname enp24s0f1np1 00:04:46.057 altname ens785f1np1 00:04:46.057 inet 192.168.100.9/24 scope global mlx_0_1 00:04:46.057 valid_lft forever preferred_lft forever 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # get_rdma_if_list 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_0 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:46.057 17:29:23 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@108 -- # echo mlx_0_1 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@109 -- # continue 2 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:04:46.057 192.168.100.9' 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:04:46.057 192.168.100.9' 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # head -n 1 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # head -n 1 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:04:46.057 192.168.100.9' 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # tail -n +2 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:04:46.057 17:29:23 
nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=493232 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 493232 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 493232 ']' 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:46.057 17:29:23 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:04:46.057 [2024-10-17 17:29:23.951983] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:04:46.057 [2024-10-17 17:29:23.952047] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:04:46.057 [2024-10-17 17:29:24.024937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:46.057 [2024-10-17 17:29:24.073061] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:04:46.057 [2024-10-17 17:29:24.073107] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:04:46.057 [2024-10-17 17:29:24.073117] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:46.057 [2024-10-17 17:29:24.073125] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:46.057 [2024-10-17 17:29:24.073133] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
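The target above is started by nvmfappstart with core mask 0xE and the full tracepoint mask, and waitforlisten blocks until the app's RPC socket answers before any configuration RPC is issued. A minimal standalone sketch of that launch-and-wait pattern, assuming a hypothetical SPDK_DIR and using the public spdk_get_version RPC as the readiness probe (the harness's own waitforlisten is more elaborate):

```bash
#!/usr/bin/env bash
# Sketch only: start an SPDK nvmf target and poll its RPC socket,
# mirroring the nvmfappstart/waitforlisten steps traced above.
# SPDK_DIR and the probe RPC are illustrative assumptions.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/path/to/spdk}   # hypothetical location
RPC_SOCK=/var/tmp/spdk.sock

# -m 0xE: reactors on cores 1-3, matching the trace; -e 0xFFFF enables all
# tracepoint groups (the "Tracepoint Group Mask 0xFFFF" notice above).
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# Poll until the app answers on the UNIX domain socket, instead of sleeping.
for _ in $(seq 1 100); do
    if "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" -t 2 spdk_get_version >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done

echo "nvmf_tgt (pid $nvmfpid) is listening on $RPC_SOCK"
```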
00:04:46.057 [2024-10-17 17:29:24.074356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:46.057 [2024-10-17 17:29:24.074449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:46.057 [2024-10-17 17:29:24.074451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.057 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:46.057 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:04:46.057 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:04:46.057 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:46.057 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:46.057 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:04:46.057 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:04:46.057 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.057 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:46.057 [2024-10-17 17:29:24.256011] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1931ab0/0x1935fa0) succeed. 00:04:46.057 [2024-10-17 17:29:24.268343] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x19330a0/0x1977640) succeed. 00:04:46.057 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.057 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:04:46.057 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.057 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:46.057 Malloc0 00:04:46.057 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.057 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:04:46.057 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.058 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:46.058 Delay0 00:04:46.058 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.058 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:04:46.058 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.058 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:46.058 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.058 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:04:46.058 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:04:46.058 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:46.058 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.058 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:04:46.058 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.058 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:46.058 [2024-10-17 17:29:24.435767] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:04:46.058 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.058 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:04:46.058 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.058 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:46.317 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.317 17:29:24 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:04:46.317 [2024-10-17 17:29:24.540349] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:04:48.843 Initializing NVMe Controllers 00:04:48.843 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:04:48.843 controller IO queue size 128 less than required 00:04:48.843 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:04:48.843 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:04:48.843 Initialization complete. Launching workers. 
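Every rpc_cmd in the trace above is a thin wrapper around scripts/rpc.py against the running target. A sketch replaying the same abort-test bring-up by hand — all RPC names and flags are copied verbatim from the trace; only SPDK_DIR is a hypothetical placeholder:

```bash
#!/usr/bin/env bash
# Replay of the abort.sh bring-up traced above, outside the test harness.
set -euo pipefail
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}   # hypothetical location
rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }

# RDMA transport with the shared-buffer/IO-unit/abort-queue sizes from the trace
rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256

# 64 MiB malloc bdev with 4096-byte blocks, wrapped in a delay bdev so that
# queued I/O stays in flight long enough for aborts to race it
rpc bdev_malloc_create 64 4096 -b Malloc0
rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000

# Subsystem cnode0 exporting Delay0 on the first RDMA interface
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

# Drive queue-depth-128 I/O for 1 second on core 0 and abort it, as above
"$SPDK_DIR/build/examples/abort" \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128
```

The delay bdev is the deliberate design choice here: artificial latency on every read and write keeps requests queued long enough for the abort example's cancellations to land, which is what produces the "abort submitted / failed to submit" accounting in the result lines that follow.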
00:04:48.843 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 41774 00:04:48.843 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 41835, failed to submit 62 00:04:48.843 success 41775, unsuccessful 60, failed 0 00:04:48.843 17:29:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:04:48.843 17:29:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:48.843 17:29:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:48.843 17:29:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:48.843 17:29:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:04:48.843 17:29:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:04:48.843 17:29:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:04:48.843 17:29:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:04:48.843 17:29:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:04:48.843 17:29:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:04:48.843 17:29:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:04:48.843 17:29:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:04:48.843 17:29:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:04:48.843 rmmod nvme_rdma 00:04:48.843 rmmod nvme_fabrics 00:04:48.843 17:29:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:04:48.843 17:29:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:04:48.843 17:29:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:04:48.843 17:29:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 493232 ']' 00:04:48.843 17:29:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 493232 00:04:48.843 17:29:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 493232 ']' 00:04:48.843 17:29:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 493232 00:04:48.843 17:29:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:04:48.843 17:29:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:48.843 17:29:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 493232 00:04:48.843 17:29:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:04:48.843 17:29:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:04:48.843 17:29:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 493232' 00:04:48.843 killing process with pid 493232 00:04:48.843 17:29:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 493232 00:04:48.843 17:29:26 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 493232 00:04:48.843 17:29:27 
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:04:48.843 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:04:48.843 00:04:48.843 real 0m9.394s 00:04:48.843 user 0m12.553s 00:04:48.843 sys 0m5.148s 00:04:48.843 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:48.843 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:48.843 ************************************ 00:04:48.843 END TEST nvmf_abort 00:04:48.843 ************************************ 00:04:48.843 17:29:27 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:04:48.843 17:29:27 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:04:48.843 17:29:27 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:48.844 17:29:27 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:48.844 ************************************ 00:04:48.844 START TEST nvmf_ns_hotplug_stress 00:04:48.844 ************************************ 00:04:48.844 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:04:48.844 * Looking for test storage... 00:04:48.844 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:04:48.844 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:48.844 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:04:48.844 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 
00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:49.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.104 --rc genhtml_branch_coverage=1 00:04:49.104 --rc genhtml_function_coverage=1 00:04:49.104 --rc genhtml_legend=1 00:04:49.104 --rc geninfo_all_blocks=1 00:04:49.104 --rc geninfo_unexecuted_blocks=1 00:04:49.104 00:04:49.104 ' 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:49.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.104 --rc genhtml_branch_coverage=1 00:04:49.104 --rc genhtml_function_coverage=1 00:04:49.104 --rc genhtml_legend=1 00:04:49.104 --rc geninfo_all_blocks=1 00:04:49.104 --rc geninfo_unexecuted_blocks=1 00:04:49.104 00:04:49.104 ' 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:49.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.104 --rc genhtml_branch_coverage=1 00:04:49.104 --rc genhtml_function_coverage=1 00:04:49.104 --rc genhtml_legend=1 00:04:49.104 --rc geninfo_all_blocks=1 00:04:49.104 --rc geninfo_unexecuted_blocks=1 00:04:49.104 00:04:49.104 ' 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:49.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:49.104 --rc genhtml_branch_coverage=1 00:04:49.104 --rc genhtml_function_coverage=1 00:04:49.104 --rc genhtml_legend=1 00:04:49.104 --rc geninfo_all_blocks=1 00:04:49.104 --rc geninfo_unexecuted_blocks=1 00:04:49.104 00:04:49.104 ' 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:49.104 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same toolchain dirs repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin [paths/export.sh@3-@6: three further near-identical PATH dumps and the export/echo elided] 00:04:49.105 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:04:49.105 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:49.105 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:49.105 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:49.105 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:49.105 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:49.105 17:29:27
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:49.105 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:49.105 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:49.105 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:49.105 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:49.105 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:04:49.105 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:04:49.105 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:04:49.105 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:49.105 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:04:49.105 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:04:49.105 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:04:49.105 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:04:49.105 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:04:49.105 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:04:49.105 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:04:49.105 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:04:49.105 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:04:49.105 17:29:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:04:55.673 17:29:33 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:04:55.673 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:04:55.673 17:29:33 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:04:55.673 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:04:55.674 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:04:55.674 Found net devices under 0000:18:00.0: mlx_0_0 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
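The same matching continues below for the second port, 0000:18:00.1. In standalone form, the lookup being traced is roughly the following sysfs walk; this is a sketch, not the common.sh implementation, with the Mellanox vendor ID 0x15b3 and device ID 0x1013 taken from the messages above.

  # Sketch: enumerate Mellanox PCI functions and the netdevs bound to them
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor"); device=$(<"$pci/device")
      [ "$vendor" = 0x15b3 ] || continue               # Mellanox, as matched above
      echo "Found ${pci##*/} ($vendor - $device)"      # e.g. (0x15b3 - 0x1013)
      for net in "$pci"/net/*; do                      # net/ holds the port's interfaces
          [ -e "$net" ] && echo "  net device: ${net##*/}"
      done
  done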
00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:04:55.674 Found net devices under 0000:18:00.1: mlx_0_1 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # rdma_device_init 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # uname 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@528 -- # allocate_nic_ips 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:04:55.674 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:04:55.674 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:04:55.674 altname enp24s0f0np0 00:04:55.674 altname ens785f0np0 00:04:55.674 inet 192.168.100.8/24 scope global mlx_0_0 00:04:55.674 valid_lft forever preferred_lft forever 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:04:55.674 17:29:33 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:04:55.674 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:04:55.674 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:04:55.674 altname enp24s0f1np1 00:04:55.674 altname ens785f1np1 00:04:55.674 inet 192.168.100.9/24 scope global mlx_0_1 00:04:55.674 valid_lft forever preferred_lft forever 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@109 -- # continue 2 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address 
mlx_0_0 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:04:55.674 192.168.100.9' 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:04:55.674 192.168.100.9' 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # head -n 1 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:04:55.674 192.168.100.9' 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # tail -n +2 00:04:55.674 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # head -n 1 00:04:55.675 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:04:55.675 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:04:55.675 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:04:55.675 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:04:55.675 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:04:55.675 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:04:55.675 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:04:55.675 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:04:55.675 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:55.675 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:04:55.675 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=496558 00:04:55.675 17:29:33 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:04:55.675 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 496558 00:04:55.675 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 496558 ']' 00:04:55.675 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.675 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:55.675 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.675 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:55.675 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:04:55.675 [2024-10-17 17:29:33.682435] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:04:55.675 [2024-10-17 17:29:33.682497] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:04:55.675 [2024-10-17 17:29:33.753761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:55.675 [2024-10-17 17:29:33.797014] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:04:55.675 [2024-10-17 17:29:33.797056] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:04:55.675 [2024-10-17 17:29:33.797065] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:55.675 [2024-10-17 17:29:33.797073] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:55.675 [2024-10-17 17:29:33.797081] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
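The waitforlisten step traced here amounts to launching the target in the background and polling its RPC socket until it answers. Below is a minimal sketch with the flags as traced; using rpc_get_methods as the liveness probe is an assumption for illustration, not what autotest_common.sh literally does.

  nvmf_tgt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $nvmf_tgt -i 0 -e 0xFFFF -m 0xE &        # shm id 0, all tracepoint groups, cores 1-3
  nvmfpid=$!
  # Poll the default RPC socket until the app services requests
  until $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
      sleep 0.5
  done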
00:04:55.675 [2024-10-17 17:29:33.798364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:55.675 [2024-10-17 17:29:33.798438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:55.675 [2024-10-17 17:29:33.798440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.675 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:55.675 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:04:55.675 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:04:55.675 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:55.675 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:04:55.675 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:04:55.675 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:04:55.675 17:29:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:04:55.932 [2024-10-17 17:29:34.141912] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11c1ab0/0x11c5fa0) succeed. 00:04:55.932 [2024-10-17 17:29:34.152085] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11c30a0/0x1207640) succeed. 00:04:55.932 17:29:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:04:56.189 17:29:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:04:56.446 [2024-10-17 17:29:34.656570] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:04:56.446 17:29:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:04:56.703 17:29:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:04:56.959 Malloc0 00:04:56.959 17:29:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:04:56.959 Delay0 00:04:56.959 17:29:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:57.216 17:29:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create NULL1 1000 512 00:04:57.473 NULL1 00:04:57.473 17:29:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:04:57.730 17:29:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=496826 00:04:57.730 17:29:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:04:57.730 17:29:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496826 00:04:57.730 17:29:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:59.097 Read completed with error (sct=0, sc=11) 00:04:59.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:59.097 17:29:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:04:59.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:59.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:59.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:59.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:59.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:59.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:04:59.097 17:29:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:04:59.097 17:29:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:04:59.354 true 00:04:59.354 17:29:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496826 00:04:59.354 17:29:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:04:59.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:00.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:00.174 17:29:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:00.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:00.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:00.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:00.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:00.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:00.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:00.174 17:29:38 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:00.174 17:29:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:00.430 true 00:05:00.430 17:29:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496826 00:05:00.430 17:29:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:01.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:01.360 17:29:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:01.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:01.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:01.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:01.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:01.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:01.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:01.360 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:01.360 17:29:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:01.360 17:29:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:01.631 true 00:05:01.631 17:29:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496826 00:05:01.631 17:29:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:02.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:02.641 17:29:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:02.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:02.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:02.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:02.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:02.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:02.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:02.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:02.641 17:29:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:02.641 17:29:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:02.907 true 00:05:02.907 17:29:41 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496826 00:05:02.907 17:29:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:03.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:03.840 17:29:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:03.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:03.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:03.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:03.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:03.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:03.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:03.840 17:29:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:03.840 17:29:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:04.098 true 00:05:04.098 17:29:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496826 00:05:04.098 17:29:42 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:05.030 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:05.030 17:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:05.030 17:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:05.030 17:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:05.288 true 00:05:05.288 17:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496826 00:05:05.288 17:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:05.546 17:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:05.804 17:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:05.804 17:29:43 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:05.804 true 00:05:05.804 17:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496826 
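From here on each iteration repeats the same pattern, so the remaining trace is best read against this sketch of the loop: while the perf job is alive, hot-remove namespace 1, re-add a namespace under live I/O, and grow the null bdev. The rpc.py calls are the ones traced; PERF_PID is the spdk_nvme_perf process started above, 496826 in this run, and the suppressed 'Read completed with error' floods are the host's reads failing around each detach.

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do                        # run until perf exits
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # hot-remove nsid 1
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # re-add under live I/O
      size=$((size + 1))
      $rpc bdev_null_resize NULL1 $size                            # resize NULL1 each pass
  done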
00:05:05.804 17:29:44 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:07.178 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:07.178 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:07.178 17:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:07.178 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:07.178 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:07.178 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:07.178 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:07.178 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:07.178 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:07.178 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:07.178 17:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:07.178 17:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:07.436 true 00:05:07.436 17:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496826 00:05:07.436 17:29:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:08.369 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:08.369 17:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:08.369 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:08.369 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:08.369 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:08.369 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:08.369 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:08.369 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:08.369 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:08.369 17:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:08.369 17:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:08.627 true 00:05:08.627 17:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496826 00:05:08.627 17:29:46 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:09.559 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:05:09.559 17:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:09.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:09.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:09.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:09.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:09.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:09.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:09.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:09.559 17:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:09.559 17:29:47 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:09.817 true 00:05:09.817 17:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496826 00:05:09.817 17:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:10.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:10.749 17:29:48 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:10.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:10.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:10.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:10.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:10.750 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:10.750 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:10.750 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:10.750 17:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:10.750 17:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:11.007 true 00:05:11.007 17:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496826 00:05:11.007 17:29:49 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:11.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:11.939 17:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:11.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:11.939 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:11.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:11.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:11.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:11.939 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:11.939 17:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:11.939 17:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:12.196 true 00:05:12.197 17:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496826 00:05:12.197 17:29:50 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:13.126 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:13.126 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:13.126 17:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:13.126 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:13.126 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:13.126 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:13.126 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:13.384 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:13.384 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:13.384 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:13.384 17:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:13.384 17:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:13.641 true 00:05:13.641 17:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496826 00:05:13.641 17:29:51 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:14.574 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.574 17:29:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:14.574 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.574 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.574 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.574 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.574 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.574 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:05:14.574 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:14.574 17:29:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:14.574 17:29:52 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:14.831 true 00:05:14.831 17:29:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496826 00:05:14.831 17:29:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:15.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.764 17:29:53 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:15.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.764 17:29:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:15.764 17:29:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:16.020 true 00:05:16.020 17:29:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496826 00:05:16.020 17:29:54 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:16.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.958 17:29:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:16.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:16.958 17:29:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:16.958 
17:29:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:17.216 true 00:05:17.216 17:29:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496826 00:05:17.216 17:29:55 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.150 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:18.150 17:29:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.150 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:18.150 17:29:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:18.150 17:29:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:18.407 true 00:05:18.407 17:29:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496826 00:05:18.407 17:29:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.665 17:29:56 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.922 17:29:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:18.922 17:29:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:18.922 true 00:05:19.183 17:29:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496826 00:05:19.183 17:29:57 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.117 17:29:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.375 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.375 17:29:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:20.375 17:29:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:20.375 true 00:05:20.375 17:29:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496826 00:05:20.375 17:29:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:21.308 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:21.308 17:29:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:21.308 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:21.308 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:21.308 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:21.308 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:21.308 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:21.566 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:21.566 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:21.566 17:29:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:21.566 17:29:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:21.823 true 00:05:21.823 17:29:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496826 00:05:21.823 17:29:59 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.389 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.647 17:30:00 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.647 17:30:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:22.647 17:30:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:22.904 true 
00:05:22.904 17:30:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496826 00:05:22.905 17:30:01 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:23.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:23.838 17:30:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:23.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:23.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:23.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:23.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:23.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:23.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:23.838 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:24.096 17:30:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:24.096 17:30:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:24.096 true 00:05:24.096 17:30:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496826 00:05:24.096 17:30:02 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.029 17:30:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:25.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:25.286 17:30:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:25.286 17:30:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:25.286 true 00:05:25.544 17:30:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496826 00:05:25.544 17:30:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:26.477 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:05:26.477 17:30:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.477 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:26.477 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:26.477 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:26.477 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:26.477 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:26.477 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:26.477 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:26.477 17:30:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:26.477 17:30:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:26.735 true 00:05:26.735 17:30:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496826 00:05:26.735 17:30:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.669 17:30:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.669 17:30:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:27.669 17:30:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:27.927 true 00:05:27.927 17:30:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496826 00:05:27.927 17:30:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.862 17:30:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.862 17:30:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 
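What this stretch of the trace is exercising: the @44-@50 markers are lines 44-50 of test/nvmf/target/ns_hotplug_stress.sh, which hot-remove namespace 1 from nqn.2016-06.io.spdk:cnode1, re-attach the Delay0 bdev, and grow the NULL1 bdev by one size unit per pass (bdev_null_resize takes the new size in MB), for as long as the perf I/O generator (PID 496826) stays alive. The recurring "Read completed with error (sct=0, sc=11)" lines are the I/O generator's rate-limited report of reads landing on the just-removed namespace; status code type 0 with status code 11 (0x0b) matches the NVMe generic "Invalid Namespace or Format" error, which is the expected failure mode for this test. A minimal sketch of the loop shape, reconstructed from the trace markers rather than copied from the script ($rpc_py and $perf_pid are assumed names):

  # Sketch only: the shape implied by the @44-@50 markers, not the verbatim script.
  # $rpc_py would point at spdk/scripts/rpc.py; $perf_pid is the I/O generator
  # (496826 in this run).
  null_size=1001
  while kill -0 "$perf_pid"; do                                        # line 44: loop while perf runs
      $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # line 45: hot-remove NSID 1
      $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # line 46: re-attach the bdev
      null_size=$((null_size + 1))                                     # line 49
      $rpc_py bdev_null_resize NULL1 "$null_size"                      # line 50: resize under I/O
  done

Once perf exits, kill -0 itself prints the "No such process" error visible a little further down, which is what ends the loop.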
00:05:28.862 17:30:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:05:29.119 true
00:05:29.119 17:30:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496826
00:05:29.119 17:30:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:29.376 17:30:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:29.633 17:30:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:05:29.633 17:30:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:05:29.890 true
00:05:29.890 17:30:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496826
00:05:29.890 17:30:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:29.890 17:30:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:30.148 17:30:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:05:30.148 17:30:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:05:30.406 true
00:05:30.406 17:30:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496826
00:05:30.406 17:30:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:30.665 17:30:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:30.665 Initializing NVMe Controllers
00:05:30.665 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:05:30.665 Controller IO queue size 128, less than required.
00:05:30.665 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:30.665 Controller IO queue size 128, less than required.
00:05:30.665 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:30.665 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:05:30.665 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:05:30.665 Initialization complete. Launching workers.
00:05:30.665 ========================================================
00:05:30.665                                                                                              Latency(us)
00:05:30.665 Device Information                                                           :       IOPS      MiB/s    Average        min        max
00:05:30.665 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    5842.20       2.85   19715.52     999.15 1138890.80
00:05:30.665 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   32861.56      16.05    3894.96    2036.15  297009.69
00:05:30.665 ========================================================
00:05:30.665 Total                                                                        :   38703.76      18.90    6283.02     999.15 1138890.80
00:05:30.665
00:05:30.922 17:30:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:05:30.922 17:30:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:05:30.922 true
00:05:30.922 17:30:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 496826
00:05:30.922 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (496826) - No such process
00:05:30.922 17:30:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 496826
00:05:30.922 17:30:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:31.179 17:30:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:31.437 17:30:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:05:31.437 17:30:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:05:31.437 17:30:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:05:31.437 17:30:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:31.437 17:30:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:05:31.695 null0
00:05:31.695 17:30:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:31.695 17:30:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:05:31.695 17:30:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
null1
00:05:31.953 17:30:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:05:31.953
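Two sanity checks on the summary table (inferences from the numbers; the log itself does not state the I/O size): the IOPS and MiB/s columns are consistent with 512-byte reads, since 5842.20 IOPS x 512 B = 2.85 MiB/s and 32861.56 IOPS x 512 B = 16.05 MiB/s, and the Total average latency is the IOPS-weighted mean of the two rows, (5842.20 x 19715.52 + 32861.56 x 3894.96) / 38703.76 = 6283 us, matching the 6283.02 shown. The order-of-magnitude gap between the rows (19.7 ms average with a 1.14 s max on NSID 1 versus 3.9 ms on NSID 2) is what one would expect if NSID 1 is the Delay0 namespace that spent the run being hot-removed and re-added.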
17:30:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:31.953 17:30:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:32.210 null3 00:05:32.210 17:30:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:32.210 17:30:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:32.210 17:30:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:32.468 null4 00:05:32.468 17:30:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:32.468 17:30:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:32.468 17:30:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:32.725 null5 00:05:32.725 17:30:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:32.725 17:30:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:32.725 17:30:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:32.725 null6 00:05:32.725 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:32.725 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:32.725 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:32.982 null7 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 502179 502180 502182 502184 502186 502188 502190 502192 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:32.982 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:33.239 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.239 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:33.239 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:33.239 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:33.239 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:33.239 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:33.239 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:33.239 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:33.496 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.496 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.496 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:33.496 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.496 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.496 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:33.496 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.496 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.496 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:33.496 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.496 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.496 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:33.496 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.496 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.496 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:33.496 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:33.496 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:33.496 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:33.496 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:05:33.496 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:33.496 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:33.496 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:33.496 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:33.496 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:05:33.753 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:05:33.753 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:33.753 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:05:33.753 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:05:33.753 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:33.753 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:05:33.753 17:30:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:05:33.753 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:05:34.010 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:34.010 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:34.010 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:05:34.010 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:34.010 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:34.010 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:05:34.010 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:34.010 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:34.010 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:05:34.010 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:34.010 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:34.010 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:34.010 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:05:34.010 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:34.010 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:05:34.010 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:34.011 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:34.011 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:05:34.011 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:34.011 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:34.011 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:34.011 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:34.011 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:34.011 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:05:34.011 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:05:34.011 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:05:34.011 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:34.268 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:05:34.268 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:34.268 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:05:34.268 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:05:34.268 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:05:34.268 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:34.268 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:34.268 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:34.268 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:34.268 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:05:34.268 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:05:34.268 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:34.268 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:34.268 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:34.268 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:05:34.268 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:34.268 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:34.268 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:05:34.268 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:34.268 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:05:34.268 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:34.268 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:34.268 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:34.268 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:34.268 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:34.268 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:05:34.525 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:34.525 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:34.525 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:05:34.525 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:05:34.525 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:34.525 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:05:34.525 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:05:34.525 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:34.525 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:05:34.525 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:05:34.525 17:30:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:05:34.782 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:34.782 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:34.782 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:05:34.782 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:34.782 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:34.782 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:05:34.782 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:34.782 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:34.782 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:05:34.782 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:34.782 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:34.782 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:05:34.782 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:34.782 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:34.782 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:05:34.782 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:34.782 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:34.782 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:05:34.782 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:34.782 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:34.782 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:34.782 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:34.782 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:34.782 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:05:35.039 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:05:35.039 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:05:35.039 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:35.039 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:05:35.039 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:35.039 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:05:35.039 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:05:35.039 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:05:35.296 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:35.296 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:35.296 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:05:35.296 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:35.296 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:35.296 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:05:35.296 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:35.296 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:35.296 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:05:35.296 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:35.296 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:35.296 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:05:35.296 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:35.296 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:35.296 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:35.296 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:35.296 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:35.296 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:05:35.296 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:35.296 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:35.296 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:05:35.296 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:35.296 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:35.296 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:05:35.296 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:05:35.296 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:35.296 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:05:35.296 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:05:35.296 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:05:35.296 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:05:35.296 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:35.554 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:05:35.554 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:35.554 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:35.554 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:05:35.554 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:35.554 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:35.554 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:05:35.554 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:35.554 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:35.554 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:05:35.554 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:35.554 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:35.554 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:35.554 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:35.554 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:35.554 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:05:35.554 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:35.554 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:35.554 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:05:35.554 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:35.554 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:05:35.554 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:35.554 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:35.554 17:30:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:05:35.812 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:35.812 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:05:35.812 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:05:35.812 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:05:35.812 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:05:35.812 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:05:35.812 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:35.812 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:05:36.069 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:36.069 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:36.069 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:05:36.069 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:36.069 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:36.069 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:05:36.069 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:36.069 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:36.069 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:05:36.069 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:36.069 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:36.069 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:36.069 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:36.069 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:36.069 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:05:36.069 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:36.069 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:36.069 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:05:36.069 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:36.069 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:36.069 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:05:36.069 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:36.069 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:36.069 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:05:36.327 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:36.327 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:05:36.327 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:05:36.327 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:05:36.327 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:05:36.327 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:36.327 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:05:36.327 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:05:36.327 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:36.327 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:36.327 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:05:36.327 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:36.327 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:36.327 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:05:36.327 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:36.327 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:36.327 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:05:36.615 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:36.615 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:36.615 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:05:36.615 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:36.615 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:36.615 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:05:36.615 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:36.615 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:36.615 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:05:36.615 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:36.615 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:36.615 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:36.615 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:36.615 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:36.615 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:05:36.615 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:05:36.615 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:36.615 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:05:36.615 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:05:36.615 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:05:36.615 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:05:36.615 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:36.615 17:30:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:05:36.882 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:36.882 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:36.882 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:05:36.882 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:36.882 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:36.882 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:05:36.882 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:36.882 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:36.882 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:36.882 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:36.882 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:36.882 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:05:36.882 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:05:36.882 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:36.882 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:05:36.882 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:36.882 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:36.882 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:05:36.882 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:36.882 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:36.882 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:05:36.882 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:36.882 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:36.882 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:05:37.156 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:05:37.156 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:05:37.157 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:05:37.157 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:05:37.157 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:05:37.157 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:37.157 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:05:37.157 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:05:37.157 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:37.157 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
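The churn above is the entire point of the test: ns_hotplug_stress.sh@16-18 runs roughly ten rounds in which the eight null bdevs are attached as namespaces 1-8 of nqn.2016-06.io.spdk:cnode1 in one shuffled order and then detached in another, racing namespace hotplug against I/O on the connected initiator. A minimal sketch of that driver loop, assuming shuf supplies the shuffled orders (only the rpc.py calls, the null<N-1> naming, and the i < 10 bound are visible in the trace):

    i=0
    while (( i < 10 )); do
        # attach null0..null7 as namespaces 1..8, in random order
        for n in $(shuf -e 1 2 3 4 5 6 7 8); do
            ./scripts/rpc.py nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 "null$((n - 1))"
        done
        # detach all eight again, in a fresh random order
        for n in $(shuf -e 1 2 3 4 5 6 7 8); do
            ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$n"
        done
        (( ++i ))
    done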
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 496558 ']'
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 496558
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 496558 ']'
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 496558
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 496558
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 496558'
killing process with pid 496558
17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 496558
00:05:37.422 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 496558
00:05:37.684 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:05:37.684 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]]
00:05:37.684
00:05:37.684 real 0m48.822s
00:05:37.684 user 3m25.712s
00:05:37.684 sys 0m14.097s
00:05:37.684 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable
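Teardown is the stock nvmftestfini sequence: drop the EXIT trap, sync, unload nvme-rdma and nvme-fabrics (the rmmod lines are modprobe's verbose output), then kill the target daemon whose pid (496558) was recorded at startup. The killprocess helper at common/autotest_common.sh@950-974 is essentially a guarded kill; a sketch reconstructed from the checks visible above (the exact handling of the sudo case is an assumption):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                        # @950: refuse an empty pid
        kill -0 "$pid" || return 0                       # @954: already gone, nothing to do
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")  # @956: what are we about to kill?
        [ "$process_name" = sudo ] && return 1           # @960: never kill a bare sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"                                      # @969
        wait "$pid"                                      # @974: reap it so ports and hugepages free up
    }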
00:05:37.684 17:30:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:05:37.684 ************************************
00:05:37.684 END TEST nvmf_ns_hotplug_stress
00:05:37.684 ************************************
00:05:37.685 17:30:15 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma
00:05:37.685 17:30:15 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:05:37.685 17:30:15 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:37.685 17:30:15 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:05:37.685 ************************************
00:05:37.685 START TEST nvmf_delete_subsystem
00:05:37.685 ************************************
00:05:37.685 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma
00:05:37.945 * Looking for test storage...
00:05:37.945 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
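run_test (common/autotest_common.sh@1101 onward) is the harness wrapper that produced both the END TEST banner above and the START TEST banner for nvmf_delete_subsystem: it sanity-checks its argument count (the '[' 3 -le 1 ']' line), prints the opening banner, times the test body, and closes with the matching banner; the real/user/sys block seen earlier is that timing. A rough sketch under those assumptions (not SPDK's exact code):

    run_test() {
        [ "$#" -le 1 ] && return 1       # need a test name plus a command (@1101)
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                        # the real/user/sys summary comes from this
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }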
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:05:37.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:37.945 --rc genhtml_branch_coverage=1
00:05:37.945 --rc genhtml_function_coverage=1
00:05:37.945 --rc genhtml_legend=1
00:05:37.945 --rc geninfo_all_blocks=1
00:05:37.945 --rc geninfo_unexecuted_blocks=1
00:05:37.945
00:05:37.945 '
00:05:37.945 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:05:37.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:37.945 --rc genhtml_branch_coverage=1
00:05:37.945 --rc genhtml_function_coverage=1
00:05:37.945 --rc genhtml_legend=1
00:05:37.945 --rc geninfo_all_blocks=1
00:05:37.945 --rc geninfo_unexecuted_blocks=1
00:05:37.945
00:05:37.945 '
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:05:37.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:37.946 --rc genhtml_branch_coverage=1
00:05:37.946 --rc genhtml_function_coverage=1
00:05:37.946 --rc genhtml_legend=1
00:05:37.946 --rc geninfo_all_blocks=1
00:05:37.946 --rc geninfo_unexecuted_blocks=1
00:05:37.946
00:05:37.946 '
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:05:37.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:37.946 --rc genhtml_branch_coverage=1
00:05:37.946 --rc genhtml_function_coverage=1
00:05:37.946 --rc genhtml_legend=1
00:05:37.946 --rc geninfo_all_blocks=1
00:05:37.946 --rc geninfo_unexecuted_blocks=1
00:05:37.946
00:05:37.946 '
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
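The lt 1.15 2 / cmp_versions 1.15 '<' 2 exchange above is scripts/common.sh deciding that the installed lcov (1.15) predates version 2, which selects the --rc lcov_* option spelling used in the LCOV_OPTS exports that follow. The algorithm is readable straight from the trace: split both version strings on '.', '-' or ':', then walk the fields numerically until one side wins. A condensed reconstruction (the loop bound mirrors the @364 ternary; defaulting missing fields to 0 is an assumption):

    cmp_versions() {
        local IFS=.-:                                   # @336-337: field separators
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 lt=0 gt=0 v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then gt=1; break; fi   # @367
            if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then lt=1; break; fi   # @368
        done
        case "$op" in
            '<') (( lt == 1 )) ;;    # cmp_versions 1.15 '<' 2: 1 < 2 in field 0, so true
            '>') (( gt == 1 )) ;;
        esac
    }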
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
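One detail worth flagging in the environment dump above: the host identity is generated, not hard-coded. nvmf/common.sh@17-18 shells out to nvme gen-hostnqn and then derives NVME_HOSTID from the UUID portion of the result, so the two values always agree. Reproducing the pair by hand looks roughly like this (the parameter expansion is an assumed equivalent; the UUID itself is machine-specific):

    NVME_HOSTNQN=$(nvme gen-hostnqn)
    # e.g. nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}    # strip the nqn prefix, keep only the UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")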
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
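The eye-watering PATH values above are a cosmetic wart rather than a bug: paths/export.sh unconditionally prepends the golangci, protoc and go directories every time it is sourced, and by this point in the run the same three directories appear six or seven times each. Harmless, since PATH lookup stops at the first hit, but an idempotent prepend would keep the log legible; a minimal sketch:

    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;             # already present, leave PATH alone
            *) PATH=$1:$PATH ;;
        esac
    }
    prepend_path /opt/golangci/1.54.2/bin
    prepend_path /opt/protoc/21.7/bin
    prepend_path /opt/go/1.21.1/bin
    export PATH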
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:37.946 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:05:37.946 17:30:16 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:05:44.496 17:30:22 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:05:44.496 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:05:44.496 
17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:05:44.496 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:05:44.496 Found net devices under 0000:18:00.0: mlx_0_0 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:05:44.496 Found net devices under 0000:18:00.1: mlx_0_1 00:05:44.496 17:30:22 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:05:44.496 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # rdma_device_init 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # uname 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@528 -- # allocate_nic_ips 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # 
continue 2 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:05:44.497 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:44.497 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:05:44.497 altname enp24s0f0np0 00:05:44.497 altname ens785f0np0 00:05:44.497 inet 192.168.100.8/24 scope global mlx_0_0 00:05:44.497 valid_lft forever preferred_lft forever 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:05:44.497 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:44.497 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:05:44.497 altname enp24s0f1np1 00:05:44.497 altname 
ens785f1np1 00:05:44.497 inet 192.168.100.9/24 scope global mlx_0_1 00:05:44.497 valid_lft forever preferred_lft forever 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@109 -- # continue 2 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:05:44.497 192.168.100.9' 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:05:44.497 192.168.100.9' 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # head -n 1 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:05:44.497 192.168.100.9' 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # head -n 1 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # tail -n +2 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:44.497 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=505971 00:05:44.498 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:05:44.498 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 505971 00:05:44.498 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@831 -- # '[' -z 505971 ']' 00:05:44.498 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.498 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.498 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.498 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.498 17:30:22 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:44.498 [2024-10-17 17:30:22.797403] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:05:44.498 [2024-10-17 17:30:22.797468] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:44.498 [2024-10-17 17:30:22.871729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.756 [2024-10-17 17:30:22.919053] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:44.756 [2024-10-17 17:30:22.919094] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:44.756 [2024-10-17 17:30:22.919104] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:44.756 [2024-10-17 17:30:22.919127] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:44.756 [2024-10-17 17:30:22.919136] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
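The 192.168.100.8/192.168.100.9 target addresses used for the rest of the run were derived just above by the get_ip_address pipeline (nvmf/common.sh@117) and the head/tail split traced at nvmf/common.sh@483-@484. A minimal standalone sketch of that step, assuming the mlx_0_0/mlx_0_1 netdevs already carry their 192.168.100.0/24 addresses as shown in the ip output above:

    # Field 4 of `ip -o -4 addr show` is the CIDR address; cut strips the prefix length.
    get_ip_address() { ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1; }

    RDMA_IP_LIST=$(printf '%s\n%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9 in this run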
00:05:44.756 [2024-10-17 17:30:22.920193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.756 [2024-10-17 17:30:22.920196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.756 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:44.756 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:05:44.756 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:05:44.756 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:44.756 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:44.756 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:44.756 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:05:44.756 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.756 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:44.756 [2024-10-17 17:30:23.085966] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x86dbc0/0x8720b0) succeed. 00:05:44.756 [2024-10-17 17:30:23.095080] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x86f110/0x8b3750) succeed. 00:05:45.013 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.013 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:45.013 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.013 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:45.013 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.013 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:45.013 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.013 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:45.013 [2024-10-17 17:30:23.181906] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:45.013 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.013 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:05:45.013 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.013 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:45.013 NULL1 00:05:45.013 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.013 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:45.013 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.013 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:45.013 Delay0 00:05:45.013 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.013 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.013 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:45.014 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:45.014 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:45.014 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=506031 00:05:45.014 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:05:45.014 17:30:23 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:45.014 [2024-10-17 17:30:23.288745] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
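delete_subsystem.sh@15-@26, traced above, stand up the whole fixture: an RDMA transport, subsystem nqn.2016-06.io.spdk:cnode1 listening on 192.168.100.8:4420, and a namespace whose bdev injects one-second latencies. Replayed against an already-running nvmf_tgt via SPDK's scripts/rpc.py (a sketch of the same RPCs the rpc_cmd wrapper issues, not the harness itself):

    # Transport options as traced: 1024 shared receive buffers, 8192-byte IO units.
    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    # Subsystem: allow any host (-a), serial SPDK00000000000001, at most 10 namespaces.
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # 1000 MB null bdev with 512-byte blocks, wrapped in a delay bdev that adds
    # 1000000 us of average and p99 latency to both reads and writes.
    rpc.py bdev_null_create NULL1 1000 512
    rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The artificial latency is what makes this test bite: spdk_nvme_perf runs at queue depth 128 (-q 128) against a device that takes a second per I/O, so almost everything it submits is still outstanding when nvmf_delete_subsystem fires two seconds later.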
00:05:46.906 17:30:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:05:46.906 17:30:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:46.906 17:30:25 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:48.275 NVMe io qpair process completion error 00:05:48.275 NVMe io qpair process completion error 00:05:48.275 NVMe io qpair process completion error 00:05:48.275 NVMe io qpair process completion error 00:05:48.275 NVMe io qpair process completion error 00:05:48.275 NVMe io qpair process completion error 00:05:48.275 17:30:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.275 17:30:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:05:48.275 17:30:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 506031 00:05:48.276 17:30:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:05:48.532 17:30:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:05:48.532 17:30:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 506031 00:05:48.532 17:30:26 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:05:49.097 Write completed with error (sct=0, sc=8) 00:05:49.097 starting I/O failed: -6 00:05:49.097 Read completed with error (sct=0, sc=8) 00:05:49.097 starting I/O failed: -6 00:05:49.097 Write completed with error (sct=0, sc=8) 00:05:49.097 starting I/O failed: -6 00:05:49.097 Read completed with error (sct=0, sc=8) 00:05:49.097 starting I/O failed: -6 00:05:49.097 Write completed with error (sct=0, sc=8) 00:05:49.097 starting I/O failed: -6 00:05:49.097 Read completed with error (sct=0, sc=8) 00:05:49.097 starting I/O failed: -6 00:05:49.097 Read completed with error (sct=0, sc=8) 00:05:49.097 starting I/O failed: -6 00:05:49.097 Read completed with error (sct=0, sc=8) 00:05:49.097 starting I/O failed: -6 00:05:49.097 Read completed with error (sct=0, sc=8) 00:05:49.097 starting I/O failed: -6 00:05:49.097 Read completed with error (sct=0, sc=8) 00:05:49.097 starting I/O failed: -6 00:05:49.097 Write completed with error (sct=0, sc=8) 00:05:49.097 starting I/O failed: -6 00:05:49.097 Read completed with error (sct=0, sc=8) 00:05:49.097 starting I/O failed: -6 00:05:49.097 Read completed with error (sct=0, sc=8) 00:05:49.097 starting I/O failed: -6 00:05:49.097 Read completed with error (sct=0, sc=8) 00:05:49.097 starting I/O failed: -6 00:05:49.097 Read completed with error (sct=0, sc=8) 00:05:49.097 starting I/O failed: -6 00:05:49.097 Write completed with error (sct=0, sc=8) 00:05:49.097 starting I/O failed: -6 00:05:49.097 Read completed with error (sct=0, sc=8) 00:05:49.097 starting I/O failed: -6 00:05:49.097 Read completed with error (sct=0, sc=8) 00:05:49.097 starting I/O failed: -6 00:05:49.097 Read completed with error (sct=0, sc=8) 00:05:49.097 starting I/O failed: -6 00:05:49.097 Read completed with error (sct=0, sc=8) 00:05:49.098 starting I/O failed: -6 00:05:49.098 Read completed with error (sct=0, sc=8) 00:05:49.098 starting I/O failed: -6 00:05:49.098 Read completed with error (sct=0, sc=8) 00:05:49.098 starting I/O failed: -6 
00:05:49.098 [several hundred repeated perf completion entries elided: 'Read completed with error (sct=0, sc=8)' and 'Write completed with error (sct=0, sc=8)' lines, most interleaved with 'starting I/O failed: -6', continuing the pattern above as in-flight and newly submitted I/O is failed back while the subsystem is deleted]
00:05:49.098 Initializing NVMe Controllers
00:05:49.098 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:05:49.098 Controller IO queue size 128, less than required.
00:05:49.098 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:05:49.098 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:05:49.098 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:05:49.098 Initialization complete. Launching workers.
00:05:49.098 ========================================================
00:05:49.098 Latency(us)
00:05:49.098 Device Information : IOPS MiB/s Average min max
00:05:49.098 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.43 0.04 1594581.11 1000118.85 2978060.71
00:05:49.098 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.43 0.04 1595599.82 1001285.33 2978363.95
00:05:49.098 ========================================================
00:05:49.098 Total : 160.86 0.08 1595090.47 1000118.85 2978363.95
00:05:49.098
00:05:49.098 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:05:49.098 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 506031
00:05:49.098 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:05:49.098 [2024-10-17 17:30:27.383548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:05:49.098 [2024-10-17 17:30:27.383602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
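The @34-@38 trace above is delete_subsystem.sh waiting for the perf process (PID 506031) to notice that its subsystem is gone: kill -0 probes the PID twice a second, and the loop gives up after roughly 30 probes. As a standalone sketch, not the script verbatim:

    delay=0
    # kill -0 delivers no signal; it only tests whether the PID still exists.
    while kill -0 "$perf_pid" 2>/dev/null; do
        if (( delay++ > 30 )); then
            echo "spdk_nvme_perf still alive after nvmf_delete_subsystem" >&2
            exit 1
        fi
        sleep 0.5
    done

The "kill: (506031) - No such process" message just below is this loop's terminating probe, and perf's own "errors occurred" exit line confirms it died from the failed-back I/O rather than from the timeout.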
00:05:49.098 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:05:49.662 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:05:49.662 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 506031 00:05:49.662 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (506031) - No such process 00:05:49.662 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 506031 00:05:49.662 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:05:49.662 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 506031 00:05:49.662 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:05:49.662 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:49.662 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:05:49.662 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:49.662 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 506031 00:05:49.662 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:05:49.662 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:49.662 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:49.662 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:49.662 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:49.662 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.662 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:49.662 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.662 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:49.662 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:49.662 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:49.662 [2024-10-17 17:30:27.904494] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:49.662 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.662 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.662 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:05:49.662 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:49.662 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:49.662 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=506595 00:05:49.662 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:05:49.662 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 506595 00:05:49.662 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:49.662 17:30:27 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:49.662 [2024-10-17 17:30:27.994255] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:05:50.226 17:30:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:50.226 17:30:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 506595 00:05:50.226 17:30:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:50.789 17:30:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:50.789 17:30:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 506595 00:05:50.789 17:30:28 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:51.045 17:30:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:51.045 17:30:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 506595 00:05:51.045 17:30:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:51.608 17:30:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:51.608 17:30:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 506595 00:05:51.608 17:30:29 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:52.170 17:30:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:52.170 17:30:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 506595 00:05:52.170 17:30:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:52.733 17:30:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:52.733 17:30:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 506595 00:05:52.733 17:30:30 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 
-- # sleep 0.5 00:05:53.296 17:30:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:53.296 17:30:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 506595 00:05:53.296 17:30:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:53.859 17:30:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:53.860 17:30:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 506595 00:05:53.860 17:30:31 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:54.116 17:30:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:54.116 17:30:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 506595 00:05:54.116 17:30:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:54.680 17:30:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:54.680 17:30:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 506595 00:05:54.680 17:30:32 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:55.243 17:30:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:55.243 17:30:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 506595 00:05:55.243 17:30:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:55.806 17:30:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:55.806 17:30:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 506595 00:05:55.806 17:30:33 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:56.369 17:30:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:56.369 17:30:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 506595 00:05:56.369 17:30:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:56.626 17:30:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:56.626 17:30:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 506595 00:05:56.626 17:30:34 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:56.882 Initializing NVMe Controllers 00:05:56.882 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:05:56.882 Controller IO queue size 128, less than required. 00:05:56.882 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
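The sleep 0.5 / kill -0 churn above is delete_subsystem.sh waiting out the backgrounded spdk_nvme_perf run while the subsystem is deleted underneath it. A minimal sketch of that bounded poll, assuming a hypothetical $perf_pid holding the perf pid (506595 in this run):

    delay=0
    # Poll the perf process; give up after roughly 10s (20 iterations x 0.5s).
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && break
        sleep 0.5
    done

The perf summary that follows is the tail of that run.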
00:05:56.882 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:05:56.882 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:05:56.882 Initialization complete. Launching workers. 00:05:56.882 ======================================================== 00:05:56.882 Latency(us) 00:05:56.882 Device Information : IOPS MiB/s Average min max 00:05:56.882 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001307.24 1000056.28 1004076.11 00:05:56.882 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002699.71 1000538.60 1007040.18 00:05:56.883 ======================================================== 00:05:56.883 Total : 256.00 0.12 1002003.47 1000056.28 1007040.18 00:05:56.883 00:05:57.140 17:30:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:57.140 17:30:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 506595 00:05:57.140 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (506595) - No such process 00:05:57.140 17:30:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 506595 00:05:57.140 17:30:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:57.140 17:30:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:05:57.140 17:30:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:05:57.140 17:30:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:05:57.140 17:30:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:05:57.140 17:30:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:05:57.140 17:30:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:05:57.140 17:30:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:57.140 17:30:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:05:57.140 rmmod nvme_rdma 00:05:57.397 rmmod nvme_fabrics 00:05:57.397 17:30:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:57.397 17:30:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:05:57.397 17:30:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:05:57.397 17:30:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 505971 ']' 00:05:57.397 17:30:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 505971 00:05:57.397 17:30:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 505971 ']' 00:05:57.397 17:30:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 505971 00:05:57.397 17:30:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:05:57.397 17:30:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:57.397 
17:30:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 505971 00:05:57.397 17:30:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:57.397 17:30:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:57.397 17:30:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 505971' 00:05:57.397 killing process with pid 505971 00:05:57.397 17:30:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 505971 00:05:57.397 17:30:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 505971 00:05:57.654 17:30:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:05:57.654 17:30:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:05:57.654 00:05:57.654 real 0m19.827s 00:05:57.654 user 0m48.973s 00:05:57.654 sys 0m6.149s 00:05:57.654 17:30:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.654 17:30:35 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:57.654 ************************************ 00:05:57.654 END TEST nvmf_delete_subsystem 00:05:57.654 ************************************ 00:05:57.654 17:30:35 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:05:57.654 17:30:35 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:57.654 17:30:35 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:57.654 17:30:35 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:57.654 ************************************ 00:05:57.654 START TEST nvmf_host_management 00:05:57.654 ************************************ 00:05:57.654 17:30:35 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:05:57.654 * Looking for test storage... 
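run_test, as traced above for nvmf_host_management, wraps each test script in START/END banners and a time measurement; the real/user/sys summary printed for nvmf_delete_subsystem came from the same wrapper. A rough reimplementation, not the exact autotest_common.sh code:

    run_test_sketch() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }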
00:05:57.654 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:05:57.654 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:57.654 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:05:57.654 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:57.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.912 --rc genhtml_branch_coverage=1 00:05:57.912 --rc genhtml_function_coverage=1 00:05:57.912 --rc genhtml_legend=1 00:05:57.912 --rc geninfo_all_blocks=1 00:05:57.912 --rc geninfo_unexecuted_blocks=1 00:05:57.912 00:05:57.912 ' 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:57.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.912 --rc genhtml_branch_coverage=1 00:05:57.912 --rc genhtml_function_coverage=1 00:05:57.912 --rc genhtml_legend=1 00:05:57.912 --rc geninfo_all_blocks=1 00:05:57.912 --rc geninfo_unexecuted_blocks=1 00:05:57.912 00:05:57.912 ' 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:57.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.912 --rc genhtml_branch_coverage=1 00:05:57.912 --rc genhtml_function_coverage=1 00:05:57.912 --rc genhtml_legend=1 00:05:57.912 --rc geninfo_all_blocks=1 00:05:57.912 --rc geninfo_unexecuted_blocks=1 00:05:57.912 00:05:57.912 ' 00:05:57.912 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:57.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.912 --rc genhtml_branch_coverage=1 00:05:57.912 --rc genhtml_function_coverage=1 00:05:57.912 --rc genhtml_legend=1 00:05:57.912 --rc geninfo_all_blocks=1 00:05:57.912 --rc geninfo_unexecuted_blocks=1 00:05:57.913 00:05:57.913 ' 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:57.913 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:05:57.913 17:30:36 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:04.477 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:04.478 17:30:42 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:06:04.478 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:06:04.478 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:06:04.478 Found net devices under 0000:18:00.0: mlx_0_0 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found 
net devices under 0000:18:00.1: mlx_0_1' 00:06:04.478 Found net devices under 0000:18:00.1: mlx_0_1 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # rdma_device_init 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # uname 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe ib_cm 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe ib_core 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe ib_umad 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@70 -- # modprobe iw_cm 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@528 -- # allocate_nic_ips 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # get_rdma_if_list 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 
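rdma_device_init above reduces to a batch of modprobes plus netdev discovery. A condensed sketch (the mlx_0_0/mlx_0_1 names are this rig's renamed ports; listing via sysfs here is an alternative to the script's rxe_cfg helper, not what the trace actually calls):

    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done
    # One way to list the netdevs backing RDMA devices:
    ls /sys/class/infiniband/*/device/net/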
00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:04.478 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:06:04.479 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:04.479 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:06:04.479 altname enp24s0f0np0 00:06:04.479 altname ens785f0np0 00:06:04.479 inet 192.168.100.8/24 scope global mlx_0_0 00:06:04.479 valid_lft forever preferred_lft forever 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:06:04.479 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:04.479 link/ether 24:8a:07:b1:b3:95 brd 
ff:ff:ff:ff:ff:ff 00:06:04.479 altname enp24s0f1np1 00:06:04.479 altname ens785f1np1 00:06:04.479 inet 192.168.100.9/24 scope global mlx_0_1 00:06:04.479 valid_lft forever preferred_lft forever 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # get_rdma_if_list 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@109 -- # continue 2 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:04.479 17:30:42 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:06:04.479 192.168.100.9' 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:06:04.479 192.168.100.9' 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # head -n 1 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:06:04.479 192.168.100.9' 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # head -n 1 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # tail -n +2 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:06:04.479 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:06:04.738 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:04.738 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:04.738 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:04.738 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:04.738 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:04.738 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:04.738 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=510638 00:06:04.738 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 510638 00:06:04.738 
17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:04.738 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 510638 ']' 00:06:04.738 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.738 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.738 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.738 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.738 17:30:42 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:04.738 [2024-10-17 17:30:42.928754] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:06:04.738 [2024-10-17 17:30:42.928816] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:04.738 [2024-10-17 17:30:43.001063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:04.738 [2024-10-17 17:30:43.048785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:04.738 [2024-10-17 17:30:43.048825] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:04.738 [2024-10-17 17:30:43.048835] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:04.738 [2024-10-17 17:30:43.048844] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:04.738 [2024-10-17 17:30:43.048851] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
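nvmfappstart, traced above, is a background launch of nvmf_tgt followed by a wait for its RPC socket to come up; a sketch using the framework_wait_init RPC as the readiness check (the real waitforlisten helper polls the socket, so this is an approximation):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # Block until the app has finished initialization and serves RPCs.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init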
00:06:04.738 [2024-10-17 17:30:43.050096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.738 [2024-10-17 17:30:43.050177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.738 [2024-10-17 17:30:43.050278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.738 [2024-10-17 17:30:43.050279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:04.996 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.996 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:04.996 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:04.996 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:04.996 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:04.996 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:04.996 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:04.996 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:04.996 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:04.996 [2024-10-17 17:30:43.228962] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa045c0/0xa08ab0) succeed. 00:06:04.996 [2024-10-17 17:30:43.239686] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa05c50/0xa4a150) succeed. 
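With the transport created and both mlx5 IB devices registered, the rpcs.txt batch assembled next is the standard target bring-up. A sketch using the names and sizes visible in this run (Malloc0, cnode0, host0, and the MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 defaults declared above); the exact batch lives in host_management.sh:

    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0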
00:06:04.996 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:04.996 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:04.996 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:04.996 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:04.996 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:05.255 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:05.255 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:05.255 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.255 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:05.255 Malloc0 00:06:05.255 [2024-10-17 17:30:43.441765] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:05.255 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.255 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:05.255 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:05.255 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:05.255 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=510857 00:06:05.255 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 510857 /var/tmp/bdevperf.sock 00:06:05.255 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 510857 ']' 00:06:05.255 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:05.255 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.255 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:05.255 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:05.255 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:05.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
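bdevperf, launched just above, receives its attach-controller config on an anonymous fd: gen_nvmf_target_json writes the JSON expanded below, and process substitution turns it into the --json /dev/fd/63 argument seen in the trace. The shape of the call:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!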
00:06:05.255 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.255 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:06:05.255 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:05.255 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:06:05.255 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:06:05.255 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:06:05.255 { 00:06:05.255 "params": { 00:06:05.255 "name": "Nvme$subsystem", 00:06:05.255 "trtype": "$TEST_TRANSPORT", 00:06:05.255 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:05.255 "adrfam": "ipv4", 00:06:05.255 "trsvcid": "$NVMF_PORT", 00:06:05.255 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:05.255 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:05.255 "hdgst": ${hdgst:-false}, 00:06:05.255 "ddgst": ${ddgst:-false} 00:06:05.255 }, 00:06:05.255 "method": "bdev_nvme_attach_controller" 00:06:05.255 } 00:06:05.255 EOF 00:06:05.255 )") 00:06:05.255 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:06:05.255 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:06:05.255 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:06:05.255 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:06:05.255 "params": { 00:06:05.255 "name": "Nvme0", 00:06:05.255 "trtype": "rdma", 00:06:05.255 "traddr": "192.168.100.8", 00:06:05.255 "adrfam": "ipv4", 00:06:05.255 "trsvcid": "4420", 00:06:05.255 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:05.255 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:05.255 "hdgst": false, 00:06:05.255 "ddgst": false 00:06:05.255 }, 00:06:05.255 "method": "bdev_nvme_attach_controller" 00:06:05.255 }' 00:06:05.255 [2024-10-17 17:30:43.551260] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:06:05.255 [2024-10-17 17:30:43.551321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid510857 ] 00:06:05.255 [2024-10-17 17:30:43.625157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.514 [2024-10-17 17:30:43.669337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.514 Running I/O for 10 seconds... 
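The waitforio gate traced below polls bdevperf's iostat until the attached bdev shows real traffic. Condensed into a sketch (bdev name, threshold, and jq filter as traced; the retry interval is a guess):

    i=10; ret=1
    while (( i != 0 )); do
        ops=$(rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
              | jq -r '.bdevs[0].num_read_ops')
        [ "$ops" -ge 100 ] && { ret=0; break; }
        sleep 0.25
        (( i-- ))
    done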
00:06:05.514 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.514 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:05.514 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:05.514 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.514 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:05.772 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.772 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:05.772 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:05.772 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:05.772 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:05.772 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:05.772 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:05.772 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:05.772 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:05.772 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:05.772 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:05.772 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.772 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:05.772 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.772 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=195 00:06:05.772 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 195 -ge 100 ']' 00:06:05.772 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:05.772 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:05.772 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:05.773 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:05.773 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.773 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
00:06:05.773 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.773 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:05.773 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:05.773 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:05.773 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:05.773 17:30:43 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:06.707 288.00 IOPS, 18.00 MiB/s [2024-10-17T15:30:45.098Z] [2024-10-17 17:30:44.977674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:06.707 [2024-10-17 17:30:44.977710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32765 cdw0:cff200 sqhd:8770 p:0 m:0 dnr:0 00:06:06.707 [2024-10-17 17:30:44.977722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:06.707 [2024-10-17 17:30:44.977732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32765 cdw0:cff200 sqhd:8770 p:0 m:0 dnr:0 00:06:06.707 [2024-10-17 17:30:44.977742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:06.707 [2024-10-17 17:30:44.977751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32765 cdw0:cff200 sqhd:8770 p:0 m:0 dnr:0 00:06:06.707 [2024-10-17 17:30:44.977762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:06:06.707 [2024-10-17 17:30:44.977771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32765 cdw0:cff200 sqhd:8770 p:0 m:0 dnr:0 00:06:06.707 [2024-10-17 17:30:44.979245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:06:06.707 [2024-10-17 17:30:44.979263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:06:06.707 [2024-10-17 17:30:44.979282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:40960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000106d4500 len:0x10000 key:0x181b00 00:06:06.707 [2024-10-17 17:30:44.979293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.707 [2024-10-17 17:30:44.979318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000106c4480 len:0x10000 key:0x181b00 00:06:06.707 [2024-10-17 17:30:44.979328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.707 [2024-10-17 17:30:44.979342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000106b4400 len:0x10000 key:0x181b00 00:06:06.707 [2024-10-17 17:30:44.979352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.707 [2024-10-17 17:30:44.979369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000106a4380 len:0x10000 key:0x181b00 00:06:06.707 [2024-10-17 17:30:44.979378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.707 [2024-10-17 17:30:44.979391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010694300 len:0x10000 key:0x181b00 00:06:06.707 [2024-10-17 17:30:44.979401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.707 [2024-10-17 17:30:44.979415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010684280 len:0x10000 key:0x181b00 00:06:06.707 [2024-10-17 17:30:44.979430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.979443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010674200 len:0x10000 key:0x181b00 00:06:06.708 [2024-10-17 17:30:44.979453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.979466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010664180 len:0x10000 key:0x181b00 00:06:06.708 [2024-10-17 17:30:44.979475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.979488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010654100 len:0x10000 key:0x181b00 00:06:06.708 [2024-10-17 17:30:44.979498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 
17:30:44.979511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010644080 len:0x10000 key:0x181b00 00:06:06.708 [2024-10-17 17:30:44.979520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.979533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010634000 len:0x10000 key:0x181b00 00:06:06.708 [2024-10-17 17:30:44.979542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.979555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010623f80 len:0x10000 key:0x181b00 00:06:06.708 [2024-10-17 17:30:44.979565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.979578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010613f00 len:0x10000 key:0x181b00 00:06:06.708 [2024-10-17 17:30:44.979587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.979600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010603e80 len:0x10000 key:0x181b00 00:06:06.708 [2024-10-17 17:30:44.979610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.979625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000168d1e80 len:0x10000 key:0x181100 00:06:06.708 [2024-10-17 17:30:44.979635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.979648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000168c1e00 len:0x10000 key:0x181100 00:06:06.708 [2024-10-17 17:30:44.979657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.979671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000168b1d80 len:0x10000 key:0x181100 00:06:06.708 [2024-10-17 17:30:44.979680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.979693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000168a1d00 len:0x10000 key:0x181100 00:06:06.708 [2024-10-17 17:30:44.979703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.979715] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016891c80 len:0x10000 key:0x181100 00:06:06.708 [2024-10-17 17:30:44.979725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.979738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016881c00 len:0x10000 key:0x181100 00:06:06.708 [2024-10-17 17:30:44.979748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.979760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016871b80 len:0x10000 key:0x181100 00:06:06.708 [2024-10-17 17:30:44.979770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.979782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016861b00 len:0x10000 key:0x181100 00:06:06.708 [2024-10-17 17:30:44.979792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.979806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a195000 len:0x10000 key:0x181a00 00:06:06.708 [2024-10-17 17:30:44.979816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.979830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a174000 len:0x10000 key:0x181a00 00:06:06.708 [2024-10-17 17:30:44.979839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.979852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a153000 len:0x10000 key:0x181a00 00:06:06.708 [2024-10-17 17:30:44.979862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.979875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a132000 len:0x10000 key:0x181a00 00:06:06.708 [2024-10-17 17:30:44.979886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.979900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a111000 len:0x10000 key:0x181a00 00:06:06.708 [2024-10-17 17:30:44.979909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.979923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:28 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a0f0000 len:0x10000 key:0x181a00 00:06:06.708 [2024-10-17 17:30:44.979932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.979946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a4ef000 len:0x10000 key:0x181a00 00:06:06.708 [2024-10-17 17:30:44.979955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.979969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a4ce000 len:0x10000 key:0x181a00 00:06:06.708 [2024-10-17 17:30:44.979978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.979991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a4ad000 len:0x10000 key:0x181a00 00:06:06.708 [2024-10-17 17:30:44.980001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.980014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a48c000 len:0x10000 key:0x181a00 00:06:06.708 [2024-10-17 17:30:44.980023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.980036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a46b000 len:0x10000 key:0x181a00 00:06:06.708 [2024-10-17 17:30:44.980046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.980059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a44a000 len:0x10000 key:0x181a00 00:06:06.708 [2024-10-17 17:30:44.980068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.980082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a429000 len:0x10000 key:0x181a00 00:06:06.708 [2024-10-17 17:30:44.980091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.980104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a408000 len:0x10000 key:0x181a00 00:06:06.708 [2024-10-17 17:30:44.980114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.980127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38656 len:128 
SGL KEYED DATA BLOCK ADDRESS 0x20000a3e7000 len:0x10000 key:0x181a00 00:06:06.708 [2024-10-17 17:30:44.980138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.980151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a3c6000 len:0x10000 key:0x181a00 00:06:06.708 [2024-10-17 17:30:44.980161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.980177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a3a5000 len:0x10000 key:0x181a00 00:06:06.708 [2024-10-17 17:30:44.980186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.980201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a384000 len:0x10000 key:0x181a00 00:06:06.708 [2024-10-17 17:30:44.980212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.980225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a363000 len:0x10000 key:0x181a00 00:06:06.708 [2024-10-17 17:30:44.980235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.980248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a342000 len:0x10000 key:0x181a00 00:06:06.708 [2024-10-17 17:30:44.980257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.980270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a321000 len:0x10000 key:0x181a00 00:06:06.708 [2024-10-17 17:30:44.980280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.708 [2024-10-17 17:30:44.980293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a300000 len:0x10000 key:0x181a00 00:06:06.709 [2024-10-17 17:30:44.980303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.709 [2024-10-17 17:30:44.980316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a6ff000 len:0x10000 key:0x181a00 00:06:06.709 [2024-10-17 17:30:44.980325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.709 [2024-10-17 17:30:44.980339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a6de000 
len:0x10000 key:0x181a00 00:06:06.709 [2024-10-17 17:30:44.980349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.709 [2024-10-17 17:30:44.980362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a6bd000 len:0x10000 key:0x181a00 00:06:06.709 [2024-10-17 17:30:44.980371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.709 [2024-10-17 17:30:44.980384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a69c000 len:0x10000 key:0x181a00 00:06:06.709 [2024-10-17 17:30:44.980394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.709 [2024-10-17 17:30:44.980409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a67b000 len:0x10000 key:0x181a00 00:06:06.709 [2024-10-17 17:30:44.980424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.709 [2024-10-17 17:30:44.980438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a65a000 len:0x10000 key:0x181a00 00:06:06.709 [2024-10-17 17:30:44.980447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.709 [2024-10-17 17:30:44.980461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a639000 len:0x10000 key:0x181a00 00:06:06.709 [2024-10-17 17:30:44.980470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.709 [2024-10-17 17:30:44.980484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a618000 len:0x10000 key:0x181a00 00:06:06.709 [2024-10-17 17:30:44.980494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.709 [2024-10-17 17:30:44.980507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a5f7000 len:0x10000 key:0x181a00 00:06:06.709 [2024-10-17 17:30:44.980517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.709 [2024-10-17 17:30:44.980530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000a5d6000 len:0x10000 key:0x181a00 00:06:06.709 [2024-10-17 17:30:44.980539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.709 [2024-10-17 17:30:44.980555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016851a80 len:0x10000 key:0x181100 00:06:06.709 
[2024-10-17 17:30:44.980564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.709 [2024-10-17 17:30:44.980577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016841a00 len:0x10000 key:0x181100 00:06:06.709 [2024-10-17 17:30:44.980587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.709 [2024-10-17 17:30:44.980600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016831980 len:0x10000 key:0x181100 00:06:06.709 [2024-10-17 17:30:44.980609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.709 [2024-10-17 17:30:44.980622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016821900 len:0x10000 key:0x181100 00:06:06.709 [2024-10-17 17:30:44.980632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.709 [2024-10-17 17:30:44.980645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016811880 len:0x10000 key:0x181100 00:06:06.709 [2024-10-17 17:30:44.980656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.709 [2024-10-17 17:30:44.980671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200016801800 len:0x10000 key:0x181100 00:06:06.709 [2024-10-17 17:30:44.980681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.709 [2024-10-17 17:30:44.980694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000166eff80 len:0x10000 key:0x180b00 00:06:06.709 [2024-10-17 17:30:44.980703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.709 [2024-10-17 17:30:44.980716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000166dff00 len:0x10000 key:0x180b00 00:06:06.709 [2024-10-17 17:30:44.980726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.709 [2024-10-17 17:30:44.980738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000166cfe80 len:0x10000 key:0x180b00 00:06:06.709 [2024-10-17 17:30:44.980748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.709 [2024-10-17 17:30:44.980760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000166bfe00 len:0x10000 key:0x180b00 00:06:06.709 [2024-10-17 17:30:44.980770] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:92f5e000 sqhd:7250 p:0 m:0 dnr:0 00:06:06.709 17:30:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 510857 00:06:06.709 17:30:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:06.709 17:30:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:06.709 17:30:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:06.709 17:30:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:06:06.709 17:30:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:06:06.709 17:30:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:06:06.709 17:30:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:06:06.709 { 00:06:06.709 "params": { 00:06:06.709 "name": "Nvme$subsystem", 00:06:06.709 "trtype": "$TEST_TRANSPORT", 00:06:06.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:06.709 "adrfam": "ipv4", 00:06:06.709 "trsvcid": "$NVMF_PORT", 00:06:06.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:06.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:06.709 "hdgst": ${hdgst:-false}, 00:06:06.709 "ddgst": ${ddgst:-false} 00:06:06.709 }, 00:06:06.709 "method": "bdev_nvme_attach_controller" 00:06:06.709 } 00:06:06.709 EOF 00:06:06.709 )") 00:06:06.709 17:30:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:06:06.709 17:30:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:06:06.709 17:30:44 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:06:06.709 17:30:45 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:06:06.709 "params": { 00:06:06.709 "name": "Nvme0", 00:06:06.709 "trtype": "rdma", 00:06:06.709 "traddr": "192.168.100.8", 00:06:06.709 "adrfam": "ipv4", 00:06:06.709 "trsvcid": "4420", 00:06:06.709 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:06.709 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:06.709 "hdgst": false, 00:06:06.709 "ddgst": false 00:06:06.709 }, 00:06:06.709 "method": "bdev_nvme_attach_controller" 00:06:06.709 }' 00:06:06.709 [2024-10-17 17:30:45.034110] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:06:06.709 [2024-10-17 17:30:45.034173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid511048 ] 00:06:06.967 [2024-10-17 17:30:45.107820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.967 [2024-10-17 17:30:45.152656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.967 Running I/O for 1 seconds... 
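The long completion dump above is the expected fallout of nvmf_subsystem_remove_host: the target deletes the host's queue pairs, every outstanding command completes as ABORTED - SQ DELETION (generic status 00h/08h, command aborted due to SQ deletion), and the controller enters the failed state. The script then kill -9s that bdevperf (pid 510857, reported Killed a few lines below) and starts a fresh one-second verify run, feeding the JSON config through process substitution; /dev/fd/62 in the trace is the file descriptor bash created for it. The invocation shape, sketched with the generator from earlier standing in:

bdevperf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf
# <(...) exposes the generator's output as /dev/fd/NN, i.e. the /dev/fd/62 above.
"$bdevperf" --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1

-q 64 keeps 64 I/Os in flight, -o 65536 issues 64 KiB I/Os, -w verify reads back and checks written data, and -t bounds the run in seconds.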
00:06:08.342 2989.00 IOPS, 186.81 MiB/s 00:06:08.342 Latency(us) 00:06:08.342 [2024-10-17T15:30:46.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:08.342 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:08.342 Verification LBA range: start 0x0 length 0x400 00:06:08.342 Nvme0n1 : 1.01 3038.35 189.90 0.00 0.00 20640.80 480.83 42170.99 00:06:08.342 [2024-10-17T15:30:46.733Z] =================================================================================================================== 00:06:08.342 [2024-10-17T15:30:46.733Z] Total : 3038.35 189.90 0.00 0.00 20640.80 480.83 42170.99 00:06:08.342 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 510857 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:06:08.342 17:30:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:08.342 17:30:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:08.342 17:30:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:08.342 17:30:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:08.342 17:30:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:08.342 17:30:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:08.342 17:30:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:08.342 17:30:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:06:08.342 17:30:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:06:08.342 17:30:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:08.342 17:30:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:08.342 17:30:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:06:08.342 rmmod nvme_rdma 00:06:08.342 rmmod nvme_fabrics 00:06:08.342 17:30:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:08.342 17:30:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:08.342 17:30:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:08.342 17:30:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 510638 ']' 00:06:08.342 17:30:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 510638 00:06:08.342 17:30:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 510638 ']' 00:06:08.342 17:30:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 510638 00:06:08.342 17:30:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:06:08.342 17:30:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- 
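The throughput column in the table above is consistent with the I/O size: at 64 KiB per I/O, one MiB is 16 I/Os, so MiB/s is simply IOPS/16, and 3038.35 IOPS gives about 189.90 MiB/s. A quick cross-check:

# 65536 bytes per I/O = 1/16 MiB, so MiB/s = IOPS / 16
echo 'scale=4; 3038.35 / 16' | bc   # 189.8968; the table rounds to 189.90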
# '[' Linux = Linux ']' 00:06:08.342 17:30:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 510638 00:06:08.342 17:30:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:08.342 17:30:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:08.342 17:30:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 510638' 00:06:08.342 killing process with pid 510638 00:06:08.342 17:30:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 510638 00:06:08.342 17:30:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 510638 00:06:08.601 [2024-10-17 17:30:46.910482] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:08.601 17:30:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:08.601 17:30:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:06:08.601 17:30:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:08.601 00:06:08.601 real 0m11.007s 00:06:08.601 user 0m20.062s 00:06:08.601 sys 0m6.197s 00:06:08.601 17:30:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.601 17:30:46 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:08.601 ************************************ 00:06:08.601 END TEST nvmf_host_management 00:06:08.601 ************************************ 00:06:08.601 17:30:46 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:06:08.601 17:30:46 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:08.601 17:30:46 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.601 17:30:46 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:08.860 ************************************ 00:06:08.860 START TEST nvmf_lvol 00:06:08.860 ************************************ 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:06:08.860 * Looking for test storage... 
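The killprocess guard above reads the command name back from ps before signalling, so a pid that now belongs to sudo (or has been recycled) is never killed blindly. A reduced sketch, with the structure inferred from the trace and the pid taken from this run:

pid=510638
process_name=$(ps --no-headers -o comm= "$pid")
if [ "$process_name" != sudo ]; then
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true   # reap it when it is a child of this shell
fi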
00:06:08.860 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.860 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:08.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.860 --rc genhtml_branch_coverage=1 00:06:08.860 --rc genhtml_function_coverage=1 00:06:08.860 --rc genhtml_legend=1 00:06:08.860 --rc geninfo_all_blocks=1 00:06:08.861 --rc geninfo_unexecuted_blocks=1 00:06:08.861 00:06:08.861 ' 00:06:08.861 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:08.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.861 --rc genhtml_branch_coverage=1 00:06:08.861 --rc genhtml_function_coverage=1 00:06:08.861 --rc genhtml_legend=1 00:06:08.861 --rc geninfo_all_blocks=1 00:06:08.861 --rc geninfo_unexecuted_blocks=1 00:06:08.861 00:06:08.861 ' 00:06:08.861 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:08.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.861 --rc genhtml_branch_coverage=1 00:06:08.861 --rc genhtml_function_coverage=1 00:06:08.861 --rc genhtml_legend=1 00:06:08.861 --rc geninfo_all_blocks=1 00:06:08.861 --rc geninfo_unexecuted_blocks=1 00:06:08.861 00:06:08.861 ' 00:06:08.861 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:08.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.861 --rc genhtml_branch_coverage=1 00:06:08.861 --rc genhtml_function_coverage=1 00:06:08.861 --rc genhtml_legend=1 00:06:08.861 --rc geninfo_all_blocks=1 00:06:08.861 --rc geninfo_unexecuted_blocks=1 00:06:08.861 00:06:08.861 ' 00:06:08.861 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:08.861 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:08.861 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux 
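The lcov check traced above is a component-wise version comparison: both version strings are split on '.', '-' and ':' into arrays and compared field by field, so lt 1.15 2 is true and the branch-coverage flags for newer lcov get exported. A numeric-only sketch of the same idea (real helpers also normalize non-numeric fields, which this skips):

lt() {
  local -a ver1 ver2
  IFS='.-:' read -ra ver1 <<< "$1"
  IFS='.-:' read -ra ver2 <<< "$2"
  local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for ((v = 0; v < n; v++)); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first differing field decides
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1   # equal versions are not less-than
}
lt 1.15 2 && echo "lcov 1.15 predates 2: use the newer flags"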
== FreeBSD ]] 00:06:08.861 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:08.861 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:08.861 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:08.861 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:08.861 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:08.861 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:08.861 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:08.861 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:08.861 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:08.861 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:06:08.861 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:06:08.861 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:08.861 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:08.861 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:08.861 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:08.861 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:08.861 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:08.861 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.861 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.861 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.861 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.861 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.861 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.861 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:08.861 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.861 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:09.119 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:09.119 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:09.119 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:09.119 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:09.119 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:09.119 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:09.119 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:09.119 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:09.119 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:09.119 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:09.119 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:09.119 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:09.119 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:09.119 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:09.119 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:09.119 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:09.119 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:06:09.119 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:09.119 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:09.120 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:09.120 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:09.120 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:09.120 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:09.120 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:09.120 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:09.120 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:09.120 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:09.120 17:30:47 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:15.681 17:30:53 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:15.681 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:06:15.682 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:06:15.682 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:15.682 17:30:53 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:06:15.682 Found net devices under 0000:18:00.0: mlx_0_0 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:06:15.682 Found net devices under 0000:18:00.1: mlx_0_1 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # rdma_device_init 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # uname 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe ib_cm 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe ib_core 
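Device discovery above works purely through sysfs: each candidate PCI function (here the two Mellanox 0x15b3:0x1013 ports at 0000:18:00.0/.1) lists its network interfaces under /sys/bus/pci/devices/<bdf>/net/, and the basename of each entry is the netdev name echoed as "Found net devices under ...". A sketch of that lookup:

# Sysfs lookup behind the "Found net devices under <bdf>" lines above;
# the two BDFs are the Mellanox ports this run discovered.
for pci in 0000:18:00.0 0000:18:00.1; do
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
  pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the netdev names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
done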
00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe ib_umad 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@70 -- # modprobe iw_cm 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@528 -- # allocate_nic_ips 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # get_rdma_if_list 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:06:15.682 
17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:06:15.682 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:15.682 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:06:15.682 altname enp24s0f0np0 00:06:15.682 altname ens785f0np0 00:06:15.682 inet 192.168.100.8/24 scope global mlx_0_0 00:06:15.682 valid_lft forever preferred_lft forever 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:06:15.682 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:15.682 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:06:15.682 altname enp24s0f1np1 00:06:15.682 altname ens785f1np1 00:06:15.682 inet 192.168.100.9/24 scope global mlx_0_1 00:06:15.682 valid_lft forever preferred_lft forever 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # get_rdma_if_list 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@109 -- # continue 2 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@109 -- # continue 2 00:06:15.682 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:06:15.683 192.168.100.9' 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:06:15.683 192.168.100.9' 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # head -n 1 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # tail -n +2 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:06:15.683 192.168.100.9' 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # head -n 1 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:06:15.683 
17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=514189 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 514189 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 514189 ']' 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:15.683 [2024-10-17 17:30:53.635236] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:06:15.683 [2024-10-17 17:30:53.635294] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:15.683 [2024-10-17 17:30:53.707161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:15.683 [2024-10-17 17:30:53.752793] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:15.683 [2024-10-17 17:30:53.752840] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:15.683 [2024-10-17 17:30:53.752850] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:15.683 [2024-10-17 17:30:53.752875] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:15.683 [2024-10-17 17:30:53.752882] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
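[Annotation] At this point the harness has loaded nvme-rdma and is starting the SPDK target: nvmfappstart launches nvmf_tgt with core mask 0x7, records its pid (514189 here), and blocks in waitforlisten until the RPC socket answers. A rough equivalent of that start-and-wait pattern, assuming the default socket /var/tmp/spdk.sock (the loop bound and readiness probe are illustrative; the real waitforlisten is more defensive):

  # Launch the target and remember its pid.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
  nvmfpid=$!

  # Poll the RPC socket until the app is up and answering.
  for _ in $(seq 1 100); do
      /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          rpc_get_methods &>/dev/null && break
      sleep 0.1
  done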
00:06:15.683 [2024-10-17 17:30:53.754138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.683 [2024-10-17 17:30:53.754241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.683 [2024-10-17 17:30:53.754244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:15.683 17:30:53 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:15.941 [2024-10-17 17:30:54.084833] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf547b0/0xf58ca0) succeed. 00:06:15.941 [2024-10-17 17:30:54.094909] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf55da0/0xf9a340) succeed. 00:06:15.941 17:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:16.200 17:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:16.200 17:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:16.458 17:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:16.458 17:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:16.458 17:30:54 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:16.716 17:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f9bef80b-a444-42a9-96d7-dfb23df2fdb1 00:06:16.716 17:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f9bef80b-a444-42a9-96d7-dfb23df2fdb1 lvol 20 00:06:16.973 17:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f2723029-2e3f-425d-8dd7-44f0c57c0e02 00:06:16.973 17:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:17.231 17:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f2723029-2e3f-425d-8dd7-44f0c57c0e02 00:06:17.488 17:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:06:17.488 [2024-10-17 17:30:55.847186] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:17.488 17:30:55 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:06:17.746 17:30:56 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=514580 00:06:17.746 17:30:56 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:17.746 17:30:56 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:19.119 17:30:57 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f2723029-2e3f-425d-8dd7-44f0c57c0e02 MY_SNAPSHOT 00:06:19.119 17:30:57 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=00a5e3cf-1350-4fd0-9ef1-df96b312c67c 00:06:19.119 17:30:57 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f2723029-2e3f-425d-8dd7-44f0c57c0e02 30 00:06:19.377 17:30:57 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 00a5e3cf-1350-4fd0-9ef1-df96b312c67c MY_CLONE 00:06:19.377 17:30:57 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=61987d19-fb28-406e-a8cd-75173ed947a4 00:06:19.377 17:30:57 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 61987d19-fb28-406e-a8cd-75173ed947a4 00:06:19.635 17:30:57 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 514580 00:06:29.685 Initializing NVMe Controllers 00:06:29.685 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:06:29.685 Controller IO queue size 128, less than required. 00:06:29.685 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:29.685 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:29.685 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:29.685 Initialization complete. Launching workers. 
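[Annotation] While spdk_nvme_perf (pid 514580) drives randwrite traffic at the subsystem from cores 3 and 4, the script exercises the lvol stack under live IO. Condensed, the provisioning and snapshot sequence traced above amounts to the following (names and sizes as in the trace; $rpc is shorthand, and the UUID captures paraphrase what the script stores in shell variables):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

  $rpc bdev_malloc_create 64 512                  # -> Malloc0
  $rpc bdev_malloc_create 64 512                  # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)  # f9bef80b-...
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20) # f2723029-...
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420

  # perf connects here, then the lvol operations run against the live namespace:
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)  # 00a5e3cf-...
  $rpc bdev_lvol_resize "$lvol" 30
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)       # 61987d19-...
  $rpc bdev_lvol_inflate "$clone"

The results of the 10-second randwrite run follow: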
00:06:29.685  ========================================================
00:06:29.685                                                                                   Latency(us)
00:06:29.685  Device Information                                                               :       IOPS      MiB/s    Average        min        max
00:06:29.685  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core  3:   16395.70      64.05    7809.52    2198.50   38994.44
00:06:29.685  RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core  4:   16258.10      63.51    7875.28    3695.12   48981.93
00:06:29.685  ========================================================
00:06:29.685  Total                                                                            :   32653.80     127.55    7842.26    2198.50   48981.93
00:06:29.685
00:06:29.685 17:31:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:29.685 17:31:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f2723029-2e3f-425d-8dd7-44f0c57c0e02 00:06:29.967 17:31:07 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f9bef80b-a444-42a9-96d7-dfb23df2fdb1 00:06:29.967 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:29.967 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:29.967 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:29.967 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:06:29.967 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:29.967 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:06:29.967 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:06:29.967 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:29.967 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:29.967 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:06:29.967 rmmod nvme_rdma 00:06:29.967 rmmod nvme_fabrics 00:06:29.967 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:29.967 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:29.967 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:29.967 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 514189 ']' 00:06:29.967 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 514189 00:06:29.967 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 514189 ']' 00:06:29.967 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 514189 00:06:29.967 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:06:29.967 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:29.967 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 514189 00:06:29.967 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:29.967 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:29.967 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 514189' 00:06:29.967 killing process with pid 514189 00:06:29.967 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 514189 00:06:29.967 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 514189 00:06:30.234 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:06:30.234 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:06:30.234 00:06:30.234 real 0m21.454s 00:06:30.234 user 1m10.533s 00:06:30.234 sys 0m6.176s 00:06:30.234 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.234 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:30.234 ************************************ 00:06:30.234 END TEST nvmf_lvol 00:06:30.235 ************************************ 00:06:30.235 17:31:08 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:06:30.235 17:31:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:30.235 17:31:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.235 17:31:08 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:30.235 ************************************ 00:06:30.235 START TEST nvmf_lvs_grow 00:06:30.235 ************************************ 00:06:30.235 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:06:30.493 * Looking for test storage... 
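[Annotation] nvmf_lvol's teardown (nvmftestfini above) is setup in reverse: delete the subsystem and lvol objects over RPC, sync, unload the NVMe fabrics modules inside a retry loop, then kill the target pid. Stripped to its shape (retry bound as in the trace; killprocess's liveness checks and the sleep are paraphrased):

  sync
  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
      sleep 1
  done
  set -e
  [ -n "$nvmfpid" ] && kill "$nvmfpid" && wait "$nvmfpid"

The END/START banners above mark the handoff from nvmf_lvol (real 0m21.454s) to nvmf_lvs_grow, whose storage and tooling probes continue below.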
00:06:30.493 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:30.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.493 --rc genhtml_branch_coverage=1 00:06:30.493 --rc genhtml_function_coverage=1 00:06:30.493 --rc genhtml_legend=1 00:06:30.493 --rc geninfo_all_blocks=1 00:06:30.493 --rc geninfo_unexecuted_blocks=1 00:06:30.493 00:06:30.493 ' 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:30.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.493 --rc genhtml_branch_coverage=1 00:06:30.493 --rc genhtml_function_coverage=1 00:06:30.493 --rc genhtml_legend=1 00:06:30.493 --rc geninfo_all_blocks=1 00:06:30.493 --rc geninfo_unexecuted_blocks=1 00:06:30.493 00:06:30.493 ' 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:30.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.493 --rc genhtml_branch_coverage=1 00:06:30.493 --rc genhtml_function_coverage=1 00:06:30.493 --rc genhtml_legend=1 00:06:30.493 --rc geninfo_all_blocks=1 00:06:30.493 --rc geninfo_unexecuted_blocks=1 00:06:30.493 00:06:30.493 ' 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:30.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.493 --rc genhtml_branch_coverage=1 00:06:30.493 --rc genhtml_function_coverage=1 00:06:30.493 --rc genhtml_legend=1 00:06:30.493 --rc geninfo_all_blocks=1 00:06:30.493 --rc geninfo_unexecuted_blocks=1 00:06:30.493 00:06:30.493 ' 00:06:30.493 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 
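[Annotation] The lcov probe a few entries up runs scripts/common.sh's component-wise version comparison: both version strings are split on '.', '-' and ':' via IFS, then compared field by field as integers. The same idea in compact form (a simplified restatement; the real cmp_versions handles more operators and edge cases):

  # Succeed when version $1 is strictly older than version $2.
  version_lt() {
      local -a v1 v2
      IFS='.-:' read -ra v1 <<< "$1"
      IFS='.-:' read -ra v2 <<< "$2"
      local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < len; i++ )); do
          (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # missing fields compare as 0
          (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1    # equal is not less-than
  }

  version_lt 1.15 2 && echo old   # lcov 1.15 < 2, so the legacy LCOV_OPTS flags get exported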
00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:30.494 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:06:30.494 17:31:08 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:37.046 17:31:14 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:06:37.046 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:37.046 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:06:37.046 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:06:37.047 17:31:14 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:06:37.047 Found net devices under 0000:18:00.0: mlx_0_0 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:06:37.047 Found net devices under 0000:18:00.1: mlx_0_1 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # rdma_device_init 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # uname 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # 
modprobe ib_cm 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe ib_core 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe ib_umad 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@70 -- # modprobe iw_cm 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@528 -- # allocate_nic_ips 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # get_rdma_if_list 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@117 -- # awk '{print $4}' 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:06:37.047 17:31:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:06:37.047 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:37.047 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:06:37.047 altname enp24s0f0np0 00:06:37.047 altname ens785f0np0 00:06:37.047 inet 192.168.100.8/24 scope global mlx_0_0 00:06:37.047 valid_lft forever preferred_lft forever 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:06:37.047 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:37.047 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:06:37.047 altname enp24s0f1np1 00:06:37.047 altname ens785f1np1 00:06:37.047 inet 192.168.100.9/24 scope global mlx_0_1 00:06:37.047 valid_lft forever preferred_lft forever 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # get_rdma_if_list 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:37.047 17:31:15 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_0 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@108 -- # echo mlx_0_1 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@109 -- # continue 2 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # cut -d/ -f1 00:06:37.047 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:06:37.048 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # awk '{print $4}' 00:06:37.048 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:06:37.048 192.168.100.9' 00:06:37.048 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:06:37.048 192.168.100.9' 00:06:37.048 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # head -n 1 00:06:37.048 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:37.048 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:06:37.048 192.168.100.9' 00:06:37.048 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # tail -n +2 00:06:37.048 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # head -n 1 00:06:37.048 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:37.048 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:06:37.048 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:37.048 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:06:37.048 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:06:37.048 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:06:37.048 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:06:37.048 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:06:37.048 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:37.048 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:37.048 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=519178 00:06:37.048 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 519178 00:06:37.048 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 519178 ']' 00:06:37.048 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.048 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.048 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:37.048 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.048 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.048 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:37.048 [2024-10-17 17:31:15.181939] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:06:37.048 [2024-10-17 17:31:15.181997] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:37.048 [2024-10-17 17:31:15.253764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.048 [2024-10-17 17:31:15.299626] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:37.048 [2024-10-17 17:31:15.299675] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:37.048 [2024-10-17 17:31:15.299685] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:37.048 [2024-10-17 17:31:15.299711] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:37.048 [2024-10-17 17:31:15.299718] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
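RDMA_IP_LIST ends up holding one discovered address per line, and the harness peels the first two entries off with head/tail exactly as traced above. A minimal sketch, assuming the two-port layout of this node:

  RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9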
00:06:37.048 [2024-10-17 17:31:15.300211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.048 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:37.048 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:06:37.048 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:06:37.048 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:37.048 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:37.305 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:37.305 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:06:37.305 [2024-10-17 17:31:15.637358] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x5ad080/0x5b1570) succeed. 00:06:37.305 [2024-10-17 17:31:15.645891] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x5ae530/0x5f2c10) succeed. 00:06:37.305 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:37.305 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.305 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.305 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:37.563 ************************************ 00:06:37.563 START TEST lvs_grow_clean 00:06:37.563 ************************************ 00:06:37.563 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:06:37.563 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:37.563 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:37.563 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:37.563 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:37.563 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:37.563 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:37.563 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:37.563 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:37.563 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:37.563 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:37.563 17:31:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:37.820 17:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=36bbbfd1-f454-439b-8ed2-0563363ed084 00:06:37.820 17:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36bbbfd1-f454-439b-8ed2-0563363ed084 00:06:37.820 17:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:38.078 17:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:38.078 17:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:38.078 17:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 36bbbfd1-f454-439b-8ed2-0563363ed084 lvol 150 00:06:38.335 17:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ff3704ea-aaf8-44b0-b51d-0d8ec1d112e2 00:06:38.335 17:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:38.335 17:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:38.335 [2024-10-17 17:31:16.706432] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:38.335 [2024-10-17 17:31:16.706514] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:38.335 true 00:06:38.335 17:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36bbbfd1-f454-439b-8ed2-0563363ed084 00:06:38.335 17:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:38.593 17:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:38.593 17:31:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:38.850 17:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ff3704ea-aaf8-44b0-b51d-0d8ec1d112e2 00:06:39.106 17:31:17 
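The numbers above follow from the sizes in play: the AIO file is 200 MiB and the cluster size is 4194304 bytes (4 MiB), so the blobstore spans 50 clusters, and the reported total_data_clusters of 49 is consistent with one cluster's worth of metadata overhead at the --md-pages-per-cluster-ratio 300 used here. A condensed sketch of the same setup sequence (workspace paths abbreviated to aio_file and rpc.py; $lvs stands for the UUID the create call prints):

  truncate -s 200M aio_file                        # backing file for the AIO bdev
  rpc.py bdev_aio_create aio_file aio_bdev 4096    # 4 KiB logical blocks
  lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)   # 150 MiB volume, prints its UUID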
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:06:39.106 [2024-10-17 17:31:17.464700] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:39.106 17:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:06:39.363 17:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=519497 00:06:39.363 17:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:39.363 17:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:39.364 17:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 519497 /var/tmp/bdevperf.sock 00:06:39.364 17:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 519497 ']' 00:06:39.364 17:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:39.364 17:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.364 17:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:39.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:39.364 17:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.364 17:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:39.364 [2024-10-17 17:31:17.715670] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
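With the lvol in place, the clean-grow test exports it over NVMe/RDMA and drives it from a separate bdevperf process listening on its own RPC socket; the attach below is the step that produces the Nvme0n1 bdev dumped next. All flags are taken from this run (binary and script paths abbreviated; $lvol is the lvol UUID captured above):

  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
  bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0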
00:06:39.364 [2024-10-17 17:31:17.715736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid519497 ] 00:06:39.621 [2024-10-17 17:31:17.790466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.621 [2024-10-17 17:31:17.835799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.621 17:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.621 17:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:06:39.621 17:31:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:39.878 Nvme0n1 00:06:39.878 17:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:40.136 [ 00:06:40.136 { 00:06:40.136 "name": "Nvme0n1", 00:06:40.136 "aliases": [ 00:06:40.136 "ff3704ea-aaf8-44b0-b51d-0d8ec1d112e2" 00:06:40.136 ], 00:06:40.136 "product_name": "NVMe disk", 00:06:40.136 "block_size": 4096, 00:06:40.136 "num_blocks": 38912, 00:06:40.136 "uuid": "ff3704ea-aaf8-44b0-b51d-0d8ec1d112e2", 00:06:40.136 "numa_id": 0, 00:06:40.136 "assigned_rate_limits": { 00:06:40.136 "rw_ios_per_sec": 0, 00:06:40.136 "rw_mbytes_per_sec": 0, 00:06:40.136 "r_mbytes_per_sec": 0, 00:06:40.136 "w_mbytes_per_sec": 0 00:06:40.136 }, 00:06:40.136 "claimed": false, 00:06:40.136 "zoned": false, 00:06:40.136 "supported_io_types": { 00:06:40.136 "read": true, 00:06:40.136 "write": true, 00:06:40.136 "unmap": true, 00:06:40.136 "flush": true, 00:06:40.136 "reset": true, 00:06:40.136 "nvme_admin": true, 00:06:40.136 "nvme_io": true, 00:06:40.136 "nvme_io_md": false, 00:06:40.136 "write_zeroes": true, 00:06:40.136 "zcopy": false, 00:06:40.136 "get_zone_info": false, 00:06:40.136 "zone_management": false, 00:06:40.136 "zone_append": false, 00:06:40.136 "compare": true, 00:06:40.136 "compare_and_write": true, 00:06:40.136 "abort": true, 00:06:40.136 "seek_hole": false, 00:06:40.136 "seek_data": false, 00:06:40.136 "copy": true, 00:06:40.136 "nvme_iov_md": false 00:06:40.136 }, 00:06:40.136 "memory_domains": [ 00:06:40.136 { 00:06:40.136 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:06:40.136 "dma_device_type": 0 00:06:40.136 } 00:06:40.136 ], 00:06:40.136 "driver_specific": { 00:06:40.136 "nvme": [ 00:06:40.136 { 00:06:40.136 "trid": { 00:06:40.136 "trtype": "RDMA", 00:06:40.136 "adrfam": "IPv4", 00:06:40.136 "traddr": "192.168.100.8", 00:06:40.136 "trsvcid": "4420", 00:06:40.136 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:40.136 }, 00:06:40.136 "ctrlr_data": { 00:06:40.136 "cntlid": 1, 00:06:40.136 "vendor_id": "0x8086", 00:06:40.136 "model_number": "SPDK bdev Controller", 00:06:40.136 "serial_number": "SPDK0", 00:06:40.136 "firmware_revision": "25.01", 00:06:40.136 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:40.136 "oacs": { 00:06:40.136 "security": 0, 00:06:40.136 "format": 0, 00:06:40.136 "firmware": 0, 00:06:40.136 "ns_manage": 0 00:06:40.136 }, 00:06:40.136 "multi_ctrlr": true, 
00:06:40.136 "ana_reporting": false 00:06:40.136 }, 00:06:40.136 "vs": { 00:06:40.136 "nvme_version": "1.3" 00:06:40.136 }, 00:06:40.136 "ns_data": { 00:06:40.136 "id": 1, 00:06:40.136 "can_share": true 00:06:40.136 } 00:06:40.136 } 00:06:40.136 ], 00:06:40.136 "mp_policy": "active_passive" 00:06:40.136 } 00:06:40.136 } 00:06:40.136 ] 00:06:40.136 17:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=519599 00:06:40.136 17:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:40.136 17:31:18 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:40.136 Running I/O for 10 seconds... 00:06:41.506 Latency(us) 00:06:41.506 [2024-10-17T15:31:19.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:41.506 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:41.506 Nvme0n1 : 1.00 33695.00 131.62 0.00 0.00 0.00 0.00 0.00 00:06:41.506 [2024-10-17T15:31:19.897Z] =================================================================================================================== 00:06:41.506 [2024-10-17T15:31:19.897Z] Total : 33695.00 131.62 0.00 0.00 0.00 0.00 0.00 00:06:41.506 00:06:42.070 17:31:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 36bbbfd1-f454-439b-8ed2-0563363ed084 00:06:42.328 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:42.328 Nvme0n1 : 2.00 33617.50 131.32 0.00 0.00 0.00 0.00 0.00 00:06:42.328 [2024-10-17T15:31:20.719Z] =================================================================================================================== 00:06:42.328 [2024-10-17T15:31:20.719Z] Total : 33617.50 131.32 0.00 0.00 0.00 0.00 0.00 00:06:42.328 00:06:42.328 true 00:06:42.328 17:31:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36bbbfd1-f454-439b-8ed2-0563363ed084 00:06:42.328 17:31:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:06:42.586 17:31:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:42.586 17:31:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:42.586 17:31:20 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 519599 00:06:43.150 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:43.150 Nvme0n1 : 3.00 33802.67 132.04 0.00 0.00 0.00 0.00 0.00 00:06:43.150 [2024-10-17T15:31:21.541Z] =================================================================================================================== 00:06:43.150 [2024-10-17T15:31:21.541Z] Total : 33802.67 132.04 0.00 0.00 0.00 0.00 0.00 00:06:43.150 00:06:44.219 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:44.219 Nvme0n1 : 4.00 33943.75 132.59 0.00 0.00 0.00 0.00 0.00 00:06:44.219 [2024-10-17T15:31:22.610Z] 
=================================================================================================================== 00:06:44.219 [2024-10-17T15:31:22.610Z] Total : 33943.75 132.59 0.00 0.00 0.00 0.00 0.00 00:06:44.219 00:06:45.155 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:45.155 Nvme0n1 : 5.00 34041.40 132.97 0.00 0.00 0.00 0.00 0.00 00:06:45.155 [2024-10-17T15:31:23.546Z] =================================================================================================================== 00:06:45.155 [2024-10-17T15:31:23.546Z] Total : 34041.40 132.97 0.00 0.00 0.00 0.00 0.00 00:06:45.155 00:06:46.531 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:46.531 Nvme0n1 : 6.00 34128.33 133.31 0.00 0.00 0.00 0.00 0.00 00:06:46.531 [2024-10-17T15:31:24.922Z] =================================================================================================================== 00:06:46.531 [2024-10-17T15:31:24.922Z] Total : 34128.33 133.31 0.00 0.00 0.00 0.00 0.00 00:06:46.531 00:06:47.466 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:47.466 Nvme0n1 : 7.00 34194.43 133.57 0.00 0.00 0.00 0.00 0.00 00:06:47.466 [2024-10-17T15:31:25.857Z] =================================================================================================================== 00:06:47.466 [2024-10-17T15:31:25.857Z] Total : 34194.43 133.57 0.00 0.00 0.00 0.00 0.00 00:06:47.466 00:06:48.401 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:48.401 Nvme0n1 : 8.00 34247.88 133.78 0.00 0.00 0.00 0.00 0.00 00:06:48.401 [2024-10-17T15:31:26.792Z] =================================================================================================================== 00:06:48.401 [2024-10-17T15:31:26.792Z] Total : 34247.88 133.78 0.00 0.00 0.00 0.00 0.00 00:06:48.401 00:06:49.348 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:49.348 Nvme0n1 : 9.00 34286.67 133.93 0.00 0.00 0.00 0.00 0.00 00:06:49.348 [2024-10-17T15:31:27.739Z] =================================================================================================================== 00:06:49.348 [2024-10-17T15:31:27.739Z] Total : 34286.67 133.93 0.00 0.00 0.00 0.00 0.00 00:06:49.348 00:06:50.287 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:50.287 Nvme0n1 : 10.00 34322.70 134.07 0.00 0.00 0.00 0.00 0.00 00:06:50.287 [2024-10-17T15:31:28.678Z] =================================================================================================================== 00:06:50.287 [2024-10-17T15:31:28.678Z] Total : 34322.70 134.07 0.00 0.00 0.00 0.00 0.00 00:06:50.287 00:06:50.287 00:06:50.287 Latency(us) 00:06:50.287 [2024-10-17T15:31:28.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:50.287 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:50.287 Nvme0n1 : 10.00 34322.28 134.07 0.00 0.00 3726.59 2592.95 7921.31 00:06:50.287 [2024-10-17T15:31:28.678Z] =================================================================================================================== 00:06:50.287 [2024-10-17T15:31:28.678Z] Total : 34322.28 134.07 0.00 0.00 3726.59 2592.95 7921.31 00:06:50.287 { 00:06:50.287 "results": [ 00:06:50.287 { 00:06:50.287 "job": "Nvme0n1", 00:06:50.287 "core_mask": "0x2", 00:06:50.287 "workload": "randwrite", 00:06:50.287 "status": "finished", 00:06:50.287 "queue_depth": 128, 00:06:50.287 "io_size": 4096, 
00:06:50.287 "runtime": 10.003095, 00:06:50.287 "iops": 34322.27725518952, 00:06:50.287 "mibps": 134.07139552808405, 00:06:50.287 "io_failed": 0, 00:06:50.287 "io_timeout": 0, 00:06:50.287 "avg_latency_us": 3726.5865205474734, 00:06:50.287 "min_latency_us": 2592.946086956522, 00:06:50.287 "max_latency_us": 7921.307826086956 00:06:50.287 } 00:06:50.287 ], 00:06:50.287 "core_count": 1 00:06:50.287 } 00:06:50.287 17:31:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 519497 00:06:50.287 17:31:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 519497 ']' 00:06:50.287 17:31:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 519497 00:06:50.287 17:31:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:06:50.287 17:31:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:50.287 17:31:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 519497 00:06:50.287 17:31:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:50.287 17:31:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:50.287 17:31:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 519497' 00:06:50.287 killing process with pid 519497 00:06:50.287 17:31:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 519497 00:06:50.287 Received shutdown signal, test time was about 10.000000 seconds 00:06:50.287 00:06:50.287 Latency(us) 00:06:50.287 [2024-10-17T15:31:28.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:50.287 [2024-10-17T15:31:28.678Z] =================================================================================================================== 00:06:50.287 [2024-10-17T15:31:28.678Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:50.287 17:31:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 519497 00:06:50.546 17:31:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:06:50.805 17:31:28 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:51.064 17:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36bbbfd1-f454-439b-8ed2-0563363ed084 00:06:51.064 17:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:06:51.064 17:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:06:51.064 17:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:06:51.064 17:31:29 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:51.323 [2024-10-17 17:31:29.570316] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:06:51.323 17:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36bbbfd1-f454-439b-8ed2-0563363ed084 00:06:51.323 17:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:06:51.323 17:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36bbbfd1-f454-439b-8ed2-0563363ed084 00:06:51.323 17:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:51.323 17:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.323 17:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:51.323 17:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.323 17:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:51.323 17:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:51.323 17:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:06:51.323 17:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:06:51.323 17:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36bbbfd1-f454-439b-8ed2-0563363ed084 00:06:51.581 request: 00:06:51.581 { 00:06:51.581 "uuid": "36bbbfd1-f454-439b-8ed2-0563363ed084", 00:06:51.581 "method": "bdev_lvol_get_lvstores", 00:06:51.581 "req_id": 1 00:06:51.581 } 00:06:51.581 Got JSON-RPC error response 00:06:51.581 response: 00:06:51.581 { 00:06:51.581 "code": -19, 00:06:51.581 "message": "No such device" 00:06:51.581 } 00:06:51.581 17:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:06:51.581 17:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:51.582 17:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:51.582 17:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:51.582 17:31:29 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
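Deleting aio_bdev hot-removes the lvstore built on it, so the follow-up bdev_lvol_get_lvstores is wrapped in the NOT helper and the -19 "No such device" JSON-RPC error above is the expected result. Stripped of the xtrace plumbing, the assertion is roughly:

  rpc.py bdev_aio_delete aio_bdev               # hot-removes the lvstore with it
  if rpc.py bdev_lvol_get_lvstores -u "$lvs"; then
      echo "lvstore still visible after base bdev removal" >&2
      exit 1                                    # the lookup succeeding would fail the test
  fi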
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:51.840 aio_bdev 00:06:51.840 17:31:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ff3704ea-aaf8-44b0-b51d-0d8ec1d112e2 00:06:51.841 17:31:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=ff3704ea-aaf8-44b0-b51d-0d8ec1d112e2 00:06:51.841 17:31:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:06:51.841 17:31:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:06:51.841 17:31:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:06:51.841 17:31:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:06:51.841 17:31:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:06:51.841 17:31:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ff3704ea-aaf8-44b0-b51d-0d8ec1d112e2 -t 2000 00:06:52.099 [ 00:06:52.099 { 00:06:52.099 "name": "ff3704ea-aaf8-44b0-b51d-0d8ec1d112e2", 00:06:52.099 "aliases": [ 00:06:52.099 "lvs/lvol" 00:06:52.099 ], 00:06:52.099 "product_name": "Logical Volume", 00:06:52.099 "block_size": 4096, 00:06:52.100 "num_blocks": 38912, 00:06:52.100 "uuid": "ff3704ea-aaf8-44b0-b51d-0d8ec1d112e2", 00:06:52.100 "assigned_rate_limits": { 00:06:52.100 "rw_ios_per_sec": 0, 00:06:52.100 "rw_mbytes_per_sec": 0, 00:06:52.100 "r_mbytes_per_sec": 0, 00:06:52.100 "w_mbytes_per_sec": 0 00:06:52.100 }, 00:06:52.100 "claimed": false, 00:06:52.100 "zoned": false, 00:06:52.100 "supported_io_types": { 00:06:52.100 "read": true, 00:06:52.100 "write": true, 00:06:52.100 "unmap": true, 00:06:52.100 "flush": false, 00:06:52.100 "reset": true, 00:06:52.100 "nvme_admin": false, 00:06:52.100 "nvme_io": false, 00:06:52.100 "nvme_io_md": false, 00:06:52.100 "write_zeroes": true, 00:06:52.100 "zcopy": false, 00:06:52.100 "get_zone_info": false, 00:06:52.100 "zone_management": false, 00:06:52.100 "zone_append": false, 00:06:52.100 "compare": false, 00:06:52.100 "compare_and_write": false, 00:06:52.100 "abort": false, 00:06:52.100 "seek_hole": true, 00:06:52.100 "seek_data": true, 00:06:52.100 "copy": false, 00:06:52.100 "nvme_iov_md": false 00:06:52.100 }, 00:06:52.100 "driver_specific": { 00:06:52.100 "lvol": { 00:06:52.100 "lvol_store_uuid": "36bbbfd1-f454-439b-8ed2-0563363ed084", 00:06:52.100 "base_bdev": "aio_bdev", 00:06:52.100 "thin_provision": false, 00:06:52.100 "num_allocated_clusters": 38, 00:06:52.100 "snapshot": false, 00:06:52.100 "clone": false, 00:06:52.100 "esnap_clone": false 00:06:52.100 } 00:06:52.100 } 00:06:52.100 } 00:06:52.100 ] 00:06:52.100 17:31:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:06:52.100 17:31:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:06:52.100 17:31:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36bbbfd1-f454-439b-8ed2-0563363ed084 00:06:52.358 17:31:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:06:52.358 17:31:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 36bbbfd1-f454-439b-8ed2-0563363ed084 00:06:52.358 17:31:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:06:52.616 17:31:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:06:52.616 17:31:30 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ff3704ea-aaf8-44b0-b51d-0d8ec1d112e2 00:06:52.874 17:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 36bbbfd1-f454-439b-8ed2-0563363ed084 00:06:52.874 17:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:53.132 17:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:53.132 00:06:53.132 real 0m15.718s 00:06:53.132 user 0m15.563s 00:06:53.132 sys 0m1.204s 00:06:53.132 17:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.132 17:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:53.132 ************************************ 00:06:53.132 END TEST lvs_grow_clean 00:06:53.132 ************************************ 00:06:53.132 17:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:06:53.132 17:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:53.132 17:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.132 17:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:53.390 ************************************ 00:06:53.390 START TEST lvs_grow_dirty 00:06:53.390 ************************************ 00:06:53.390 17:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:06:53.390 17:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:53.390 17:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:53.390 17:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:53.390 17:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:53.390 17:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:53.390 17:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:53.390 17:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:53.390 17:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:53.390 17:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:53.390 17:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:53.390 17:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:53.648 17:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=9ec0d6f3-560c-49eb-b847-67f0e76fbf2a 00:06:53.648 17:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9ec0d6f3-560c-49eb-b847-67f0e76fbf2a 00:06:53.648 17:31:31 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:53.931 17:31:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:53.931 17:31:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:53.931 17:31:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9ec0d6f3-560c-49eb-b847-67f0e76fbf2a lvol 150 00:06:54.189 17:31:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e8528372-ed60-4dda-84c1-dcdaa35e6881 00:06:54.189 17:31:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:54.190 17:31:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:54.190 [2024-10-17 17:31:32.516341] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:54.190 [2024-10-17 17:31:32.516403] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:54.190 true 00:06:54.190 17:31:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9ec0d6f3-560c-49eb-b847-67f0e76fbf2a 00:06:54.190 17:31:32 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:54.447 17:31:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:54.447 17:31:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:54.704 17:31:32 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e8528372-ed60-4dda-84c1-dcdaa35e6881 00:06:54.962 17:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:06:54.962 [2024-10-17 17:31:33.286661] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:54.962 17:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:06:55.219 17:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=521667 00:06:55.219 17:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:55.219 17:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 521667 /var/tmp/bdevperf.sock 00:06:55.219 17:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 521667 ']' 00:06:55.219 17:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:55.219 17:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.219 17:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:55.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:55.219 17:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.219 17:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:06:55.219 17:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:55.219 [2024-10-17 17:31:33.541856] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
00:06:55.219 [2024-10-17 17:31:33.541916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid521667 ] 00:06:55.477 [2024-10-17 17:31:33.612922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.477 [2024-10-17 17:31:33.654872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.477 17:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.477 17:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:06:55.477 17:31:33 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:55.734 Nvme0n1 00:06:55.734 17:31:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:55.991 [ 00:06:55.991 { 00:06:55.991 "name": "Nvme0n1", 00:06:55.991 "aliases": [ 00:06:55.991 "e8528372-ed60-4dda-84c1-dcdaa35e6881" 00:06:55.991 ], 00:06:55.991 "product_name": "NVMe disk", 00:06:55.991 "block_size": 4096, 00:06:55.991 "num_blocks": 38912, 00:06:55.991 "uuid": "e8528372-ed60-4dda-84c1-dcdaa35e6881", 00:06:55.991 "numa_id": 0, 00:06:55.991 "assigned_rate_limits": { 00:06:55.991 "rw_ios_per_sec": 0, 00:06:55.991 "rw_mbytes_per_sec": 0, 00:06:55.991 "r_mbytes_per_sec": 0, 00:06:55.991 "w_mbytes_per_sec": 0 00:06:55.991 }, 00:06:55.991 "claimed": false, 00:06:55.991 "zoned": false, 00:06:55.991 "supported_io_types": { 00:06:55.991 "read": true, 00:06:55.991 "write": true, 00:06:55.991 "unmap": true, 00:06:55.991 "flush": true, 00:06:55.991 "reset": true, 00:06:55.991 "nvme_admin": true, 00:06:55.991 "nvme_io": true, 00:06:55.991 "nvme_io_md": false, 00:06:55.991 "write_zeroes": true, 00:06:55.991 "zcopy": false, 00:06:55.991 "get_zone_info": false, 00:06:55.991 "zone_management": false, 00:06:55.991 "zone_append": false, 00:06:55.991 "compare": true, 00:06:55.991 "compare_and_write": true, 00:06:55.991 "abort": true, 00:06:55.991 "seek_hole": false, 00:06:55.991 "seek_data": false, 00:06:55.991 "copy": true, 00:06:55.991 "nvme_iov_md": false 00:06:55.991 }, 00:06:55.991 "memory_domains": [ 00:06:55.991 { 00:06:55.991 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:06:55.991 "dma_device_type": 0 00:06:55.991 } 00:06:55.991 ], 00:06:55.991 "driver_specific": { 00:06:55.991 "nvme": [ 00:06:55.991 { 00:06:55.991 "trid": { 00:06:55.991 "trtype": "RDMA", 00:06:55.991 "adrfam": "IPv4", 00:06:55.991 "traddr": "192.168.100.8", 00:06:55.991 "trsvcid": "4420", 00:06:55.991 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:55.991 }, 00:06:55.991 "ctrlr_data": { 00:06:55.991 "cntlid": 1, 00:06:55.991 "vendor_id": "0x8086", 00:06:55.991 "model_number": "SPDK bdev Controller", 00:06:55.991 "serial_number": "SPDK0", 00:06:55.991 "firmware_revision": "25.01", 00:06:55.991 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:55.991 "oacs": { 00:06:55.991 "security": 0, 00:06:55.991 "format": 0, 00:06:55.991 "firmware": 0, 00:06:55.991 "ns_manage": 0 00:06:55.991 }, 00:06:55.991 "multi_ctrlr": true, 
00:06:55.991 "ana_reporting": false 00:06:55.991 }, 00:06:55.991 "vs": { 00:06:55.991 "nvme_version": "1.3" 00:06:55.991 }, 00:06:55.991 "ns_data": { 00:06:55.991 "id": 1, 00:06:55.991 "can_share": true 00:06:55.991 } 00:06:55.991 } 00:06:55.991 ], 00:06:55.991 "mp_policy": "active_passive" 00:06:55.991 } 00:06:55.991 } 00:06:55.991 ] 00:06:55.991 17:31:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=521758 00:06:55.991 17:31:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:55.991 17:31:34 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:55.991 Running I/O for 10 seconds... 00:06:57.364 Latency(us) 00:06:57.364 [2024-10-17T15:31:35.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:57.364 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:57.364 Nvme0n1 : 1.00 33728.00 131.75 0.00 0.00 0.00 0.00 0.00 00:06:57.364 [2024-10-17T15:31:35.755Z] =================================================================================================================== 00:06:57.364 [2024-10-17T15:31:35.755Z] Total : 33728.00 131.75 0.00 0.00 0.00 0.00 0.00 00:06:57.364 00:06:57.930 17:31:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9ec0d6f3-560c-49eb-b847-67f0e76fbf2a 00:06:58.189 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:58.189 Nvme0n1 : 2.00 34017.50 132.88 0.00 0.00 0.00 0.00 0.00 00:06:58.189 [2024-10-17T15:31:36.580Z] =================================================================================================================== 00:06:58.189 [2024-10-17T15:31:36.580Z] Total : 34017.50 132.88 0.00 0.00 0.00 0.00 0.00 00:06:58.189 00:06:58.189 true 00:06:58.189 17:31:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9ec0d6f3-560c-49eb-b847-67f0e76fbf2a 00:06:58.189 17:31:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:06:58.447 17:31:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:58.447 17:31:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:58.447 17:31:36 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 521758 00:06:59.011 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:59.011 Nvme0n1 : 3.00 34090.33 133.17 0.00 0.00 0.00 0.00 0.00 00:06:59.011 [2024-10-17T15:31:37.402Z] =================================================================================================================== 00:06:59.011 [2024-10-17T15:31:37.402Z] Total : 34090.33 133.17 0.00 0.00 0.00 0.00 0.00 00:06:59.011 00:06:59.945 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:59.945 Nvme0n1 : 4.00 34193.25 133.57 0.00 0.00 0.00 0.00 0.00 00:06:59.945 [2024-10-17T15:31:38.336Z] 
=================================================================================================================== 00:06:59.945 [2024-10-17T15:31:38.336Z] Total : 34193.25 133.57 0.00 0.00 0.00 0.00 0.00 00:06:59.945 00:07:01.318 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:01.318 Nvme0n1 : 5.00 34271.40 133.87 0.00 0.00 0.00 0.00 0.00 00:07:01.318 [2024-10-17T15:31:39.709Z] =================================================================================================================== 00:07:01.318 [2024-10-17T15:31:39.709Z] Total : 34271.40 133.87 0.00 0.00 0.00 0.00 0.00 00:07:01.318 00:07:02.250 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:02.250 Nvme0n1 : 6.00 34283.33 133.92 0.00 0.00 0.00 0.00 0.00 00:07:02.250 [2024-10-17T15:31:40.641Z] =================================================================================================================== 00:07:02.250 [2024-10-17T15:31:40.641Z] Total : 34283.33 133.92 0.00 0.00 0.00 0.00 0.00 00:07:02.250 00:07:03.185 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:03.185 Nvme0n1 : 7.00 34318.14 134.06 0.00 0.00 0.00 0.00 0.00 00:07:03.185 [2024-10-17T15:31:41.576Z] =================================================================================================================== 00:07:03.185 [2024-10-17T15:31:41.576Z] Total : 34318.14 134.06 0.00 0.00 0.00 0.00 0.00 00:07:03.185 00:07:04.119 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:04.119 Nvme0n1 : 8.00 34359.88 134.22 0.00 0.00 0.00 0.00 0.00 00:07:04.119 [2024-10-17T15:31:42.510Z] =================================================================================================================== 00:07:04.119 [2024-10-17T15:31:42.510Z] Total : 34359.88 134.22 0.00 0.00 0.00 0.00 0.00 00:07:04.119 00:07:05.053 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:05.053 Nvme0n1 : 9.00 34329.11 134.10 0.00 0.00 0.00 0.00 0.00 00:07:05.053 [2024-10-17T15:31:43.444Z] =================================================================================================================== 00:07:05.053 [2024-10-17T15:31:43.444Z] Total : 34329.11 134.10 0.00 0.00 0.00 0.00 0.00 00:07:05.053 00:07:05.987 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:05.987 Nvme0n1 : 10.00 34355.00 134.20 0.00 0.00 0.00 0.00 0.00 00:07:05.987 [2024-10-17T15:31:44.378Z] =================================================================================================================== 00:07:05.987 [2024-10-17T15:31:44.378Z] Total : 34355.00 134.20 0.00 0.00 0.00 0.00 0.00 00:07:05.987 00:07:05.987 00:07:05.987 Latency(us) 00:07:05.987 [2024-10-17T15:31:44.378Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:05.987 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:05.987 Nvme0n1 : 10.00 34353.27 134.19 0.00 0.00 3723.20 2706.92 8833.11 00:07:05.987 [2024-10-17T15:31:44.378Z] =================================================================================================================== 00:07:05.987 [2024-10-17T15:31:44.378Z] Total : 34353.27 134.19 0.00 0.00 3723.20 2706.92 8833.11 00:07:05.987 { 00:07:05.987 "results": [ 00:07:05.987 { 00:07:05.987 "job": "Nvme0n1", 00:07:05.987 "core_mask": "0x2", 00:07:05.987 "workload": "randwrite", 00:07:05.987 "status": "finished", 00:07:05.987 "queue_depth": 128, 00:07:05.987 "io_size": 4096, 
00:07:05.987 "runtime": 10.003299, 00:07:05.987 "iops": 34353.26685726379, 00:07:05.987 "mibps": 134.1924486611867, 00:07:05.987 "io_failed": 0, 00:07:05.987 "io_timeout": 0, 00:07:05.987 "avg_latency_us": 3723.203076087653, 00:07:05.987 "min_latency_us": 2706.9217391304346, 00:07:05.987 "max_latency_us": 8833.11304347826 00:07:05.987 } 00:07:05.987 ], 00:07:05.987 "core_count": 1 00:07:05.987 } 00:07:05.987 17:31:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 521667 00:07:05.987 17:31:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 521667 ']' 00:07:05.987 17:31:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 521667 00:07:05.987 17:31:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:07:06.244 17:31:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:06.244 17:31:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 521667 00:07:06.244 17:31:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:06.244 17:31:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:06.244 17:31:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 521667' 00:07:06.244 killing process with pid 521667 00:07:06.244 17:31:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 521667 00:07:06.244 Received shutdown signal, test time was about 10.000000 seconds 00:07:06.244 00:07:06.245 Latency(us) 00:07:06.245 [2024-10-17T15:31:44.636Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:06.245 [2024-10-17T15:31:44.636Z] =================================================================================================================== 00:07:06.245 [2024-10-17T15:31:44.636Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:06.245 17:31:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 521667 00:07:06.245 17:31:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:06.502 17:31:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:06.760 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9ec0d6f3-560c-49eb-b847-67f0e76fbf2a 00:07:06.760 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:07.018 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:07.018 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:07.018 17:31:45 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 519178 00:07:07.018 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 519178 00:07:07.018 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 519178 Killed "${NVMF_APP[@]}" "$@" 00:07:07.019 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:07.019 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:07.019 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:07.019 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:07.019 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:07.019 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=523226 00:07:07.019 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 523226 00:07:07.019 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:07.019 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 523226 ']' 00:07:07.019 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.019 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.019 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.019 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.019 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:07.019 [2024-10-17 17:31:45.310110] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:07:07.019 [2024-10-17 17:31:45.310175] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.019 [2024-10-17 17:31:45.381537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.277 [2024-10-17 17:31:45.427813] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:07.277 [2024-10-17 17:31:45.427854] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:07.277 [2024-10-17 17:31:45.427863] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:07.277 [2024-10-17 17:31:45.427872] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
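The trace above is the crux of lvs_grow_dirty: the script confirms the lvstore still reports 61 free clusters for the grown volume, SIGKILLs the original target (pid 519178) so the lvstore is never cleanly unloaded, and immediately starts a replacement single-core target. A minimal sketch of that pattern follows; the rpc.py/nvmf_tgt paths and the pid/uuid variable names are abbreviated here for readability and are not the script's own:

  # confirm the grow is still only in memory, then die dirty
  free_clusters=$(rpc.py bdev_lvol_get_lvstores -u "$lvs_uuid" | jq -r '.[0].free_clusters')
  (( free_clusters == 61 ))           # grown lvol not yet persisted to the lvstore
  kill -9 "$old_nvmfpid"              # no lvstore unload: metadata left stale on purpose
  nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &    # fresh target on core 0
  nvmfpid=$!
  waitforlisten "$nvmfpid"            # block until /var/tmp/spdk.sock answers RPCs

The blobstore recovery notices that follow are the payoff: when aio_bdev is re-created, the new target replays the dirty metadata ("Performing recovery on blobstore").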
00:07:07.277 [2024-10-17 17:31:45.427879] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:07.277 [2024-10-17 17:31:45.428322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.277 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.277 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:07.277 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:07.277 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:07.277 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:07.277 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:07.277 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:07.535 [2024-10-17 17:31:45.752678] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:07.535 [2024-10-17 17:31:45.752778] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:07.535 [2024-10-17 17:31:45.752805] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:07.535 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:07.535 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e8528372-ed60-4dda-84c1-dcdaa35e6881 00:07:07.535 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=e8528372-ed60-4dda-84c1-dcdaa35e6881 00:07:07.535 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:07.535 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:07.535 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:07.535 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:07.535 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:07.793 17:31:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e8528372-ed60-4dda-84c1-dcdaa35e6881 -t 2000 00:07:07.793 [ 00:07:07.793 { 00:07:07.793 "name": "e8528372-ed60-4dda-84c1-dcdaa35e6881", 00:07:07.793 "aliases": [ 00:07:07.793 "lvs/lvol" 00:07:07.793 ], 00:07:07.793 "product_name": "Logical Volume", 00:07:07.793 "block_size": 4096, 00:07:07.793 "num_blocks": 38912, 00:07:07.793 "uuid": "e8528372-ed60-4dda-84c1-dcdaa35e6881", 00:07:07.793 "assigned_rate_limits": { 00:07:07.793 "rw_ios_per_sec": 0, 00:07:07.793 "rw_mbytes_per_sec": 0, 
00:07:07.793 "r_mbytes_per_sec": 0, 00:07:07.793 "w_mbytes_per_sec": 0 00:07:07.793 }, 00:07:07.793 "claimed": false, 00:07:07.793 "zoned": false, 00:07:07.793 "supported_io_types": { 00:07:07.793 "read": true, 00:07:07.793 "write": true, 00:07:07.793 "unmap": true, 00:07:07.793 "flush": false, 00:07:07.793 "reset": true, 00:07:07.793 "nvme_admin": false, 00:07:07.793 "nvme_io": false, 00:07:07.793 "nvme_io_md": false, 00:07:07.793 "write_zeroes": true, 00:07:07.793 "zcopy": false, 00:07:07.793 "get_zone_info": false, 00:07:07.793 "zone_management": false, 00:07:07.793 "zone_append": false, 00:07:07.793 "compare": false, 00:07:07.793 "compare_and_write": false, 00:07:07.793 "abort": false, 00:07:07.793 "seek_hole": true, 00:07:07.793 "seek_data": true, 00:07:07.793 "copy": false, 00:07:07.793 "nvme_iov_md": false 00:07:07.793 }, 00:07:07.793 "driver_specific": { 00:07:07.793 "lvol": { 00:07:07.793 "lvol_store_uuid": "9ec0d6f3-560c-49eb-b847-67f0e76fbf2a", 00:07:07.793 "base_bdev": "aio_bdev", 00:07:07.793 "thin_provision": false, 00:07:07.793 "num_allocated_clusters": 38, 00:07:07.793 "snapshot": false, 00:07:07.794 "clone": false, 00:07:07.794 "esnap_clone": false 00:07:07.794 } 00:07:07.794 } 00:07:07.794 } 00:07:07.794 ] 00:07:07.794 17:31:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:07.794 17:31:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9ec0d6f3-560c-49eb-b847-67f0e76fbf2a 00:07:07.794 17:31:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:08.051 17:31:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:08.051 17:31:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9ec0d6f3-560c-49eb-b847-67f0e76fbf2a 00:07:08.051 17:31:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:08.310 17:31:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:08.310 17:31:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:08.568 [2024-10-17 17:31:46.725490] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:08.568 17:31:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9ec0d6f3-560c-49eb-b847-67f0e76fbf2a 00:07:08.568 17:31:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:07:08.568 17:31:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9ec0d6f3-560c-49eb-b847-67f0e76fbf2a 00:07:08.568 17:31:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:08.568 17:31:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.568 17:31:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:08.568 17:31:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.568 17:31:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:08.568 17:31:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:08.568 17:31:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:08.568 17:31:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:08.568 17:31:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9ec0d6f3-560c-49eb-b847-67f0e76fbf2a 00:07:08.568 request: 00:07:08.568 { 00:07:08.568 "uuid": "9ec0d6f3-560c-49eb-b847-67f0e76fbf2a", 00:07:08.568 "method": "bdev_lvol_get_lvstores", 00:07:08.568 "req_id": 1 00:07:08.568 } 00:07:08.568 Got JSON-RPC error response 00:07:08.568 response: 00:07:08.568 { 00:07:08.568 "code": -19, 00:07:08.568 "message": "No such device" 00:07:08.568 } 00:07:08.826 17:31:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:07:08.826 17:31:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:08.826 17:31:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:08.826 17:31:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:08.826 17:31:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:08.826 aio_bdev 00:07:08.826 17:31:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e8528372-ed60-4dda-84c1-dcdaa35e6881 00:07:08.826 17:31:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=e8528372-ed60-4dda-84c1-dcdaa35e6881 00:07:08.826 17:31:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:08.826 17:31:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:08.826 17:31:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:08.826 17:31:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:08.826 17:31:47 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:09.093 17:31:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e8528372-ed60-4dda-84c1-dcdaa35e6881 -t 2000 00:07:09.352 [ 00:07:09.352 { 00:07:09.352 "name": "e8528372-ed60-4dda-84c1-dcdaa35e6881", 00:07:09.352 "aliases": [ 00:07:09.352 "lvs/lvol" 00:07:09.352 ], 00:07:09.352 "product_name": "Logical Volume", 00:07:09.352 "block_size": 4096, 00:07:09.352 "num_blocks": 38912, 00:07:09.352 "uuid": "e8528372-ed60-4dda-84c1-dcdaa35e6881", 00:07:09.352 "assigned_rate_limits": { 00:07:09.352 "rw_ios_per_sec": 0, 00:07:09.352 "rw_mbytes_per_sec": 0, 00:07:09.352 "r_mbytes_per_sec": 0, 00:07:09.352 "w_mbytes_per_sec": 0 00:07:09.352 }, 00:07:09.352 "claimed": false, 00:07:09.352 "zoned": false, 00:07:09.352 "supported_io_types": { 00:07:09.352 "read": true, 00:07:09.352 "write": true, 00:07:09.352 "unmap": true, 00:07:09.352 "flush": false, 00:07:09.352 "reset": true, 00:07:09.352 "nvme_admin": false, 00:07:09.352 "nvme_io": false, 00:07:09.352 "nvme_io_md": false, 00:07:09.352 "write_zeroes": true, 00:07:09.352 "zcopy": false, 00:07:09.352 "get_zone_info": false, 00:07:09.352 "zone_management": false, 00:07:09.352 "zone_append": false, 00:07:09.352 "compare": false, 00:07:09.352 "compare_and_write": false, 00:07:09.352 "abort": false, 00:07:09.352 "seek_hole": true, 00:07:09.352 "seek_data": true, 00:07:09.352 "copy": false, 00:07:09.352 "nvme_iov_md": false 00:07:09.352 }, 00:07:09.352 "driver_specific": { 00:07:09.352 "lvol": { 00:07:09.352 "lvol_store_uuid": "9ec0d6f3-560c-49eb-b847-67f0e76fbf2a", 00:07:09.352 "base_bdev": "aio_bdev", 00:07:09.352 "thin_provision": false, 00:07:09.352 "num_allocated_clusters": 38, 00:07:09.352 "snapshot": false, 00:07:09.352 "clone": false, 00:07:09.352 "esnap_clone": false 00:07:09.352 } 00:07:09.352 } 00:07:09.352 } 00:07:09.352 ] 00:07:09.353 17:31:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:09.353 17:31:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9ec0d6f3-560c-49eb-b847-67f0e76fbf2a 00:07:09.353 17:31:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:09.353 17:31:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:09.353 17:31:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9ec0d6f3-560c-49eb-b847-67f0e76fbf2a 00:07:09.353 17:31:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:09.611 17:31:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:09.611 17:31:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e8528372-ed60-4dda-84c1-dcdaa35e6881 00:07:09.869 17:31:48 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9ec0d6f3-560c-49eb-b847-67f0e76fbf2a 00:07:10.126 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:10.126 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:10.385 00:07:10.385 real 0m16.991s 00:07:10.385 user 0m44.389s 00:07:10.385 sys 0m3.456s 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:10.385 ************************************ 00:07:10.385 END TEST lvs_grow_dirty 00:07:10.385 ************************************ 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:10.385 nvmf_trace.0 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:10.385 rmmod nvme_rdma 00:07:10.385 rmmod nvme_fabrics 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:10.385 
17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 523226 ']' 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 523226 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 523226 ']' 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 523226 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 523226 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 523226' 00:07:10.385 killing process with pid 523226 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 523226 00:07:10.385 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 523226 00:07:10.644 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:10.644 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:07:10.644 00:07:10.644 real 0m40.326s 00:07:10.644 user 1m5.610s 00:07:10.644 sys 0m10.121s 00:07:10.644 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.644 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:10.644 ************************************ 00:07:10.644 END TEST nvmf_lvs_grow 00:07:10.644 ************************************ 00:07:10.644 17:31:48 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:07:10.644 17:31:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:10.644 17:31:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.644 17:31:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:10.644 ************************************ 00:07:10.644 START TEST nvmf_bdev_io_wait 00:07:10.644 ************************************ 00:07:10.644 17:31:48 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:07:10.904 * Looking for test storage... 
00:07:10.904 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:10.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.904 --rc genhtml_branch_coverage=1 00:07:10.904 --rc genhtml_function_coverage=1 00:07:10.904 --rc genhtml_legend=1 00:07:10.904 --rc geninfo_all_blocks=1 00:07:10.904 --rc geninfo_unexecuted_blocks=1 00:07:10.904 00:07:10.904 ' 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:10.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.904 --rc genhtml_branch_coverage=1 00:07:10.904 --rc genhtml_function_coverage=1 00:07:10.904 --rc genhtml_legend=1 00:07:10.904 --rc geninfo_all_blocks=1 00:07:10.904 --rc geninfo_unexecuted_blocks=1 00:07:10.904 00:07:10.904 ' 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:10.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.904 --rc genhtml_branch_coverage=1 00:07:10.904 --rc genhtml_function_coverage=1 00:07:10.904 --rc genhtml_legend=1 00:07:10.904 --rc geninfo_all_blocks=1 00:07:10.904 --rc geninfo_unexecuted_blocks=1 00:07:10.904 00:07:10.904 ' 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:10.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.904 --rc genhtml_branch_coverage=1 00:07:10.904 --rc genhtml_function_coverage=1 00:07:10.904 --rc genhtml_legend=1 00:07:10.904 --rc geninfo_all_blocks=1 00:07:10.904 --rc geninfo_unexecuted_blocks=1 00:07:10.904 00:07:10.904 ' 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:10.904 17:31:49 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.904 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.905 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:10.905 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.905 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:10.905 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:10.905 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:10.905 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:10.905 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:10.905 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:10.905 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:10.905 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:10.905 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:10.905 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:10.905 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:10.905 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:10.905 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:10.905 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:10.905 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:07:10.905 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:10.905 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:10.905 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:10.905 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:10.905 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.905 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:10.905 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.905 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:10.905 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:10.905 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:10.905 17:31:49 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:17.468 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:17.468 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:17.468 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:17.468 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:17.468 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:17.468 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:17.468 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:17.468 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:17.468 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:17.468 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:17.468 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:17.468 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:17.468 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:17.468 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:07:17.468 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:17.468 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:17.468 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:17.468 17:31:55 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:17.468 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:17.468 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:17.468 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:17.468 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:17.468 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:17.468 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:17.468 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:17.468 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:17.468 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:17.468 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:17.468 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:17.468 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:17.468 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:07:17.469 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:07:17.469 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:07:17.469 17:31:55 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:07:17.469 Found net devices under 0000:18:00.0: mlx_0_0 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:07:17.469 Found net devices under 0000:18:00.1: mlx_0_1 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # rdma_device_init 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # uname 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@528 -- # allocate_nic_ips 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:17.469 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:17.469 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:07:17.469 altname enp24s0f0np0 00:07:17.469 altname ens785f0np0 00:07:17.469 inet 192.168.100.8/24 scope global mlx_0_0 00:07:17.469 valid_lft forever preferred_lft forever 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:17.469 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:17.469 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:07:17.469 altname enp24s0f1np1 00:07:17.469 altname ens785f1np1 00:07:17.469 inet 192.168.100.9/24 scope global mlx_0_1 00:07:17.469 valid_lft forever preferred_lft forever 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:17.469 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # mapfile -t 
rxe_net_devs 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@109 -- # continue 2 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:07:17.470 192.168.100.9' 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:07:17.470 192.168.100.9' 00:07:17.470 
17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # head -n 1 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:07:17.470 192.168.100.9' 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # head -n 1 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # tail -n +2 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=526689 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # waitforlisten 526689 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 526689 ']' 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.470 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:17.470 [2024-10-17 17:31:55.824486] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
00:07:17.470 [2024-10-17 17:31:55.824545] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.730 [2024-10-17 17:31:55.897276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:17.730 [2024-10-17 17:31:55.946048] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:17.730 [2024-10-17 17:31:55.946098] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:17.730 [2024-10-17 17:31:55.946108] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:17.730 [2024-10-17 17:31:55.946116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:17.730 [2024-10-17 17:31:55.946123] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:17.730 [2024-10-17 17:31:55.947528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.730 [2024-10-17 17:31:55.947616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.730 [2024-10-17 17:31:55.947693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:17.730 [2024-10-17 17:31:55.947695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.730 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.730 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:07:17.730 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:17.730 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:17.730 17:31:55 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:17.730 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:17.730 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:17.730 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.730 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:17.730 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.730 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:17.730 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.730 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:17.730 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.730 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:17.730 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.730 17:31:56 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:17.989 [2024-10-17 17:31:56.126208] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x950370/0x954860) succeed. 00:07:17.989 [2024-10-17 17:31:56.136409] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x951a00/0x995f00) succeed. 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:17.989 Malloc0 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:17.989 [2024-10-17 17:31:56.320057] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=526811 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=526814 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 
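Before the JSON generation that follows, the rpc_cmd calls traced above amount to this target bring-up sequence. A condensed sketch using scripts/rpc.py directly, with flags exactly as traced (the rpc wrapper path is an assumption):

# Bring up the NVMe-oF RDMA target, as in the rpc_cmd trace above.
rpc() { /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py "$@"; }
rpc bdev_set_options -p 5 -c 1        # deliberately tiny bdev_io pool: this is
                                      # what forces IO to wait in this test
rpc framework_start_init              # finish the --wait-for-rpc startup
rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512-byte blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420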
00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:17.989 { 00:07:17.989 "params": { 00:07:17.989 "name": "Nvme$subsystem", 00:07:17.989 "trtype": "$TEST_TRANSPORT", 00:07:17.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:17.989 "adrfam": "ipv4", 00:07:17.989 "trsvcid": "$NVMF_PORT", 00:07:17.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:17.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:17.989 "hdgst": ${hdgst:-false}, 00:07:17.989 "ddgst": ${ddgst:-false} 00:07:17.989 }, 00:07:17.989 "method": "bdev_nvme_attach_controller" 00:07:17.989 } 00:07:17.989 EOF 00:07:17.989 )") 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=526817 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:17.989 { 00:07:17.989 "params": { 00:07:17.989 "name": "Nvme$subsystem", 00:07:17.989 "trtype": "$TEST_TRANSPORT", 00:07:17.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:17.989 "adrfam": "ipv4", 00:07:17.989 "trsvcid": "$NVMF_PORT", 00:07:17.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:17.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:17.989 "hdgst": ${hdgst:-false}, 00:07:17.989 "ddgst": ${ddgst:-false} 00:07:17.989 }, 00:07:17.989 "method": "bdev_nvme_attach_controller" 00:07:17.989 } 00:07:17.989 EOF 00:07:17.989 )") 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=526821 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:17.989 { 00:07:17.989 "params": { 00:07:17.989 "name": "Nvme$subsystem", 00:07:17.989 "trtype": "$TEST_TRANSPORT", 
00:07:17.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:17.989 "adrfam": "ipv4", 00:07:17.989 "trsvcid": "$NVMF_PORT", 00:07:17.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:17.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:17.989 "hdgst": ${hdgst:-false}, 00:07:17.989 "ddgst": ${ddgst:-false} 00:07:17.989 }, 00:07:17.989 "method": "bdev_nvme_attach_controller" 00:07:17.989 } 00:07:17.989 EOF 00:07:17.989 )") 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:07:17.989 { 00:07:17.989 "params": { 00:07:17.989 "name": "Nvme$subsystem", 00:07:17.989 "trtype": "$TEST_TRANSPORT", 00:07:17.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:17.989 "adrfam": "ipv4", 00:07:17.989 "trsvcid": "$NVMF_PORT", 00:07:17.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:17.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:17.989 "hdgst": ${hdgst:-false}, 00:07:17.989 "ddgst": ${ddgst:-false} 00:07:17.989 }, 00:07:17.989 "method": "bdev_nvme_attach_controller" 00:07:17.989 } 00:07:17.989 EOF 00:07:17.989 )") 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 526811 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:17.989 "params": { 00:07:17.989 "name": "Nvme1", 00:07:17.989 "trtype": "rdma", 00:07:17.989 "traddr": "192.168.100.8", 00:07:17.989 "adrfam": "ipv4", 00:07:17.989 "trsvcid": "4420", 00:07:17.989 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:17.989 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:17.989 "hdgst": false, 00:07:17.989 "ddgst": false 00:07:17.989 }, 00:07:17.989 "method": "bdev_nvme_attach_controller" 00:07:17.989 }' 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:17.989 "params": { 00:07:17.989 "name": "Nvme1", 00:07:17.989 "trtype": "rdma", 00:07:17.989 "traddr": "192.168.100.8", 00:07:17.989 "adrfam": "ipv4", 00:07:17.989 "trsvcid": "4420", 00:07:17.989 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:17.989 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:17.989 "hdgst": false, 00:07:17.989 "ddgst": false 00:07:17.989 }, 00:07:17.989 "method": "bdev_nvme_attach_controller" 00:07:17.989 }' 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:07:17.989 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:17.989 "params": { 00:07:17.989 "name": "Nvme1", 00:07:17.989 "trtype": "rdma", 00:07:17.989 "traddr": "192.168.100.8", 00:07:17.989 "adrfam": "ipv4", 00:07:17.989 "trsvcid": "4420", 00:07:17.989 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:17.989 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:17.989 "hdgst": false, 00:07:17.990 "ddgst": false 00:07:17.990 }, 00:07:17.990 "method": "bdev_nvme_attach_controller" 00:07:17.990 }' 00:07:17.990 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:07:17.990 17:31:56 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:07:17.990 "params": { 00:07:17.990 "name": "Nvme1", 00:07:17.990 "trtype": "rdma", 00:07:17.990 "traddr": "192.168.100.8", 00:07:17.990 "adrfam": "ipv4", 00:07:17.990 "trsvcid": "4420", 00:07:17.990 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:17.990 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:17.990 "hdgst": false, 00:07:17.990 "ddgst": false 00:07:17.990 }, 00:07:17.990 "method": "bdev_nvme_attach_controller" 00:07:17.990 }'
00:07:17.990 [2024-10-17 17:31:56.370692] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization...
00:07:17.990 [2024-10-17 17:31:56.370693] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization...
00:07:17.990 [2024-10-17 17:31:56.370753] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:07:17.990 [2024-10-17 17:31:56.370753] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:07:17.990 [2024-10-17 17:31:56.377853] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization...
00:07:17.990 [2024-10-17 17:31:56.377908] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:07:17.990 [2024-10-17 17:31:56.378450] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization...
00:07:17.990 [2024-10-17 17:31:56.378496] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:18.248 [2024-10-17 17:31:56.560829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.248 [2024-10-17 17:31:56.603329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:18.505 [2024-10-17 17:31:56.655612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.505 [2024-10-17 17:31:56.697734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:18.505 [2024-10-17 17:31:56.780040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.505 [2024-10-17 17:31:56.834087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.505 [2024-10-17 17:31:56.837534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:18.505 [2024-10-17 17:31:56.875720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:18.762 Running I/O for 1 seconds... 00:07:18.762 Running I/O for 1 seconds... 00:07:18.762 Running I/O for 1 seconds... 00:07:18.762 Running I/O for 1 seconds... 00:07:19.715 17250.00 IOPS, 67.38 MiB/s [2024-10-17T15:31:58.106Z] 14454.00 IOPS, 56.46 MiB/s 00:07:19.715 Latency(us) 00:07:19.715 [2024-10-17T15:31:58.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.715 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:19.715 Nvme1n1 : 1.01 17288.05 67.53 0.00 0.00 7381.20 4673.00 16526.47 00:07:19.715 [2024-10-17T15:31:58.106Z] =================================================================================================================== 00:07:19.715 [2024-10-17T15:31:58.106Z] Total : 17288.05 67.53 0.00 0.00 7381.20 4673.00 16526.47 00:07:19.715 00:07:19.715 Latency(us) 00:07:19.715 [2024-10-17T15:31:58.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.715 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:19.715 Nvme1n1 : 1.01 14495.51 56.62 0.00 0.00 8801.06 5584.81 19033.93 00:07:19.715 [2024-10-17T15:31:58.106Z] =================================================================================================================== 00:07:19.715 [2024-10-17T15:31:58.106Z] Total : 14495.51 56.62 0.00 0.00 8801.06 5584.81 19033.93 00:07:19.715 17147.00 IOPS, 66.98 MiB/s 00:07:19.715 Latency(us) 00:07:19.715 [2024-10-17T15:31:58.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.715 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:19.715 Nvme1n1 : 1.01 17237.39 67.33 0.00 0.00 7408.96 2920.63 17894.18 00:07:19.715 [2024-10-17T15:31:58.106Z] =================================================================================================================== 00:07:19.715 [2024-10-17T15:31:58.106Z] Total : 17237.39 67.33 0.00 0.00 7408.96 2920.63 17894.18 00:07:19.715 254904.00 IOPS, 995.72 MiB/s 00:07:19.715 Latency(us) 00:07:19.715 [2024-10-17T15:31:58.106Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.715 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:19.715 Nvme1n1 : 1.00 254515.21 994.20 0.00 0.00 500.38 221.72 1980.33 00:07:19.715 [2024-10-17T15:31:58.106Z] 
=================================================================================================================== 00:07:19.715 [2024-10-17T15:31:58.106Z] Total : 254515.21 994.20 0.00 0.00 500.38 221.72 1980.33 00:07:19.974 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 526814 00:07:19.974 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 526817 00:07:19.974 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 526821 00:07:19.974 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:19.974 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:19.974 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:19.974 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:19.974 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:19.974 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:19.974 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:19.974 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:19.974 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:19.974 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:19.974 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:19.974 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:19.974 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:19.974 rmmod nvme_rdma 00:07:19.974 rmmod nvme_fabrics 00:07:19.974 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:19.974 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:19.974 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:19.974 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 526689 ']' 00:07:19.974 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 526689 00:07:19.974 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 526689 ']' 00:07:19.974 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 526689 00:07:19.974 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:07:19.974 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.974 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 526689 00:07:19.974 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.974 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
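The four result tables above come from four concurrent bdevperf instances, one per workload, each fed the generated JSON over process substitution (which is why the traces show --json /dev/fd/63). A sketch of the write job as assembled from the traced arguments; gen_nvmf_target_json is the helper traced earlier, and the read/flush/unmap jobs differ only in -m/-i and -w:

# Launch one bdevperf job against the target: core mask 0x10, shm id 1,
# queue depth 128, 4 KiB IOs, write workload, 1 s run, 256 MiB hugepage memory.
spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk
$spdk/build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
wait $WRITE_PID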
00:07:19.974 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 526689' 00:07:19.974 killing process with pid 526689 00:07:19.974 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 526689 00:07:19.974 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 526689 00:07:20.240 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:20.240 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:07:20.240 00:07:20.240 real 0m9.583s 00:07:20.240 user 0m17.375s 00:07:20.240 sys 0m6.509s 00:07:20.240 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.240 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:20.240 ************************************ 00:07:20.240 END TEST nvmf_bdev_io_wait 00:07:20.240 ************************************ 00:07:20.240 17:31:58 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:07:20.240 17:31:58 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:20.240 17:31:58 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.240 17:31:58 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:20.498 ************************************ 00:07:20.498 START TEST nvmf_queue_depth 00:07:20.498 ************************************ 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:07:20.498 * Looking for test storage... 
00:07:20.498 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.498 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:20.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.498 --rc genhtml_branch_coverage=1 00:07:20.499 --rc genhtml_function_coverage=1 00:07:20.499 --rc genhtml_legend=1 00:07:20.499 --rc geninfo_all_blocks=1 00:07:20.499 --rc geninfo_unexecuted_blocks=1 00:07:20.499 00:07:20.499 ' 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:20.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.499 --rc genhtml_branch_coverage=1 00:07:20.499 --rc genhtml_function_coverage=1 00:07:20.499 --rc genhtml_legend=1 00:07:20.499 --rc geninfo_all_blocks=1 00:07:20.499 --rc geninfo_unexecuted_blocks=1 00:07:20.499 00:07:20.499 ' 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:20.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.499 --rc genhtml_branch_coverage=1 00:07:20.499 --rc genhtml_function_coverage=1 00:07:20.499 --rc genhtml_legend=1 00:07:20.499 --rc geninfo_all_blocks=1 00:07:20.499 --rc geninfo_unexecuted_blocks=1 00:07:20.499 00:07:20.499 ' 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:20.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.499 --rc genhtml_branch_coverage=1 00:07:20.499 --rc genhtml_function_coverage=1 00:07:20.499 --rc genhtml_legend=1 00:07:20.499 --rc geninfo_all_blocks=1 00:07:20.499 --rc geninfo_unexecuted_blocks=1 00:07:20.499 00:07:20.499 ' 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:20.499 17:31:58 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:20.499 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:20.499 17:31:58 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 
-- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:07:27.064 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:07:27.064 Found 0000:18:00.1 (0x15b3 - 0x1013) 
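The probe below resolves each matched PCI function to its kernel net device through sysfs. A minimal sketch of that lookup, mirroring the nvmf/common.sh@409/@425/@426 lines traced next:

# Map a PCI function to its net device name via sysfs.
pci=0000:18:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the basename
echo "Found net devices under $pci: ${pci_net_devs[*]}"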
00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:07:27.064 Found net devices under 0000:18:00.0: mlx_0_0 00:07:27.064 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:07:27.065 Found net devices under 0000:18:00.1: mlx_0_1 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # rdma_device_init 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@527 -- # load_ib_rdma_modules 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # uname 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@528 -- # allocate_nic_ips 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in 
$(get_rdma_if_list) 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:27.065 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:27.065 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:07:27.065 altname enp24s0f0np0 00:07:27.065 altname ens785f0np0 00:07:27.065 inet 192.168.100.8/24 scope global mlx_0_0 00:07:27.065 valid_lft forever preferred_lft forever 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:27.065 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:27.065 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:07:27.065 altname enp24s0f1np1 00:07:27.065 altname ens785f1np1 00:07:27.065 inet 192.168.100.9/24 scope global mlx_0_1 00:07:27.065 valid_lft forever preferred_lft forever 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:27.065 17:32:05 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@109 -- # continue 2 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:07:27.065 192.168.100.9' 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:07:27.065 192.168.100.9' 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@483 -- # head -n 1 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:07:27.065 192.168.100.9' 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # tail -n +2 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # head -n 1 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=530042 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 530042 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 530042 ']' 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.065 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:27.065 [2024-10-17 17:32:05.363755] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
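[Editor's note] The xtrace above shows how nvmf/common.sh resolves the two RDMA target addresses: get_ip_address pipes `ip -o -4 addr show` through awk and cut, and the results become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. A minimal sketch reconstructed from those traced commands follows; the interface names (mlx_0_0, mlx_0_1) and the resulting 192.168.100.8/9 addresses are specific to this run.

    # Sketch of the helper traced above (nvmf/common.sh); reconstructed, not copied.
    get_ip_address() {
        local interface=$1
        # `ip -o -4` prints one line per IPv4 address; field 4 is "ADDR/PREFIX",
        # so awk selects it and cut strips the prefix length.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run

The -o flag is what makes the pipeline safe: each address stays on a single line, so a plain awk field split is enough.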
00:07:27.065 [2024-10-17 17:32:05.363815] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:27.065 [2024-10-17 17:32:05.441031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.324 [2024-10-17 17:32:05.487337] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:27.324 [2024-10-17 17:32:05.487379] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:27.324 [2024-10-17 17:32:05.487388] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:27.324 [2024-10-17 17:32:05.487397] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:27.324 [2024-10-17 17:32:05.487404] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:27.324 [2024-10-17 17:32:05.487893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.324 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.324 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:07:27.324 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:27.324 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:27.324 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:27.324 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:27.324 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:27.324 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.324 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:27.324 [2024-10-17 17:32:05.652825] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd253b0/0xd298a0) succeed. 00:07:27.324 [2024-10-17 17:32:05.661734] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd26860/0xd6af40) succeed. 
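[Editor's note] At this point the harness has started the target and created the RDMA transport: nvmfappstart launches nvmf_tgt with core mask 0x2, waits for its RPC socket, and rpc_cmd issues nvmf_create_transport with the options accumulated in NVMF_TRANSPORT_OPTS; the two "Create IB device ... succeed" notices confirm both mlx5 ports were claimed. A hedged summary of that sequence, with rpc_cmd replaced by a direct call to the stock scripts/rpc.py and waitforlisten replaced by a simple socket poll:

    # Sketch only; the binary path and flags are taken verbatim from this log.
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!                                   # 530042 in this run
    # stand-in for waitforlisten: rpc.py talks to /var/tmp/spdk.sock by default
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done
    "$SPDK/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192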
00:07:27.324 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.324 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:27.324 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.324 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:27.583 Malloc0 00:07:27.583 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.583 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:27.583 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.583 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:27.583 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.583 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:27.583 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.583 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:27.583 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.583 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:27.583 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.583 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:27.583 [2024-10-17 17:32:05.754911] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:27.583 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.583 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=530093 00:07:27.583 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:27.583 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:27.583 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 530093 /var/tmp/bdevperf.sock 00:07:27.583 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 530093 ']' 00:07:27.583 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:27.583 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.583 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:27.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:27.583 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.583 17:32:05 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:27.583 [2024-10-17 17:32:05.805320] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:07:27.583 [2024-10-17 17:32:05.805377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid530093 ] 00:07:27.583 [2024-10-17 17:32:05.878946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.583 [2024-10-17 17:32:05.926665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.842 17:32:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.842 17:32:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:07:27.842 17:32:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:27.842 17:32:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.842 17:32:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:27.842 NVMe0n1 00:07:27.842 17:32:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.842 17:32:06 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:27.842 Running I/O for 10 seconds... 
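[Editor's note] The measurement itself is now set up: a 64 MiB malloc bdev is exported through subsystem nqn.2016-06.io.spdk:cnode1 on 192.168.100.8:4420, and bdevperf is started in wait mode (-z) with queue depth 1024, 4 KiB I/O, a verify workload, and a 10-second runtime. The following sketch stitches those traced commands together; the sleep is a stand-in for the harness's waitforlisten helper, everything else is copied from the trace.

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # as in the previous note
    # Export the test bdev over NVMe/RDMA (RPCs as traced above).
    "$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
    "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # Run bdevperf against it; -z makes it wait for RPC configuration.
    "$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    bdevperf_pid=$!                              # 530093 in this run
    sleep 2                                      # stand-in for waitforlisten on bdevperf.sock
    # Attaching the controller exposes the namespace as bdev NVMe0n1.
    "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests

The per-second IOPS samples and the latency summary that follow come from this perform_tests call.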
00:07:30.156 16807.00 IOPS, 65.65 MiB/s [2024-10-17T15:32:09.483Z] 17185.50 IOPS, 67.13 MiB/s [2024-10-17T15:32:10.421Z] 17330.33 IOPS, 67.70 MiB/s [2024-10-17T15:32:11.357Z] 17353.75 IOPS, 67.79 MiB/s [2024-10-17T15:32:12.316Z] 17372.40 IOPS, 67.86 MiB/s [2024-10-17T15:32:13.253Z] 17395.17 IOPS, 67.95 MiB/s [2024-10-17T15:32:14.625Z] 17399.14 IOPS, 67.97 MiB/s [2024-10-17T15:32:15.558Z] 17408.00 IOPS, 68.00 MiB/s [2024-10-17T15:32:16.492Z] 17408.00 IOPS, 68.00 MiB/s [2024-10-17T15:32:16.492Z] 17408.10 IOPS, 68.00 MiB/s 00:07:38.101 Latency(us) 00:07:38.101 [2024-10-17T15:32:16.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:38.101 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:38.101 Verification LBA range: start 0x0 length 0x4000 00:07:38.101 NVMe0n1 : 10.03 17451.05 68.17 0.00 0.00 58533.28 22681.15 38295.82 00:07:38.101 [2024-10-17T15:32:16.492Z] =================================================================================================================== 00:07:38.101 [2024-10-17T15:32:16.492Z] Total : 17451.05 68.17 0.00 0.00 58533.28 22681.15 38295.82 00:07:38.101 { 00:07:38.101 "results": [ 00:07:38.101 { 00:07:38.101 "job": "NVMe0n1", 00:07:38.101 "core_mask": "0x1", 00:07:38.101 "workload": "verify", 00:07:38.101 "status": "finished", 00:07:38.101 "verify_range": { 00:07:38.101 "start": 0, 00:07:38.101 "length": 16384 00:07:38.101 }, 00:07:38.101 "queue_depth": 1024, 00:07:38.101 "io_size": 4096, 00:07:38.101 "runtime": 10.03401, 00:07:38.101 "iops": 17451.04898241082, 00:07:38.101 "mibps": 68.16816008754226, 00:07:38.101 "io_failed": 0, 00:07:38.101 "io_timeout": 0, 00:07:38.101 "avg_latency_us": 58533.276216628525, 00:07:38.101 "min_latency_us": 22681.154782608697, 00:07:38.101 "max_latency_us": 38295.819130434786 00:07:38.101 } 00:07:38.101 ], 00:07:38.101 "core_count": 1 00:07:38.101 } 00:07:38.101 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 530093 00:07:38.101 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 530093 ']' 00:07:38.101 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 530093 00:07:38.101 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:07:38.101 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:38.101 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 530093 00:07:38.101 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:38.101 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:38.101 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 530093' 00:07:38.101 killing process with pid 530093 00:07:38.101 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 530093 00:07:38.101 Received shutdown signal, test time was about 10.000000 seconds 00:07:38.101 00:07:38.101 Latency(us) 00:07:38.101 [2024-10-17T15:32:16.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:38.101 [2024-10-17T15:32:16.492Z] 
=================================================================================================================== 00:07:38.101 [2024-10-17T15:32:16.492Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:38.101 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 530093 00:07:38.360 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:38.360 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:07:38.360 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:38.360 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:07:38.360 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:38.360 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:38.360 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:07:38.360 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:38.360 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:38.360 rmmod nvme_rdma 00:07:38.360 rmmod nvme_fabrics 00:07:38.360 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:38.360 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:07:38.360 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:07:38.360 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 530042 ']' 00:07:38.360 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 530042 00:07:38.360 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 530042 ']' 00:07:38.360 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 530042 00:07:38.360 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:07:38.360 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:38.360 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 530042 00:07:38.360 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:38.360 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:38.360 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 530042' 00:07:38.360 killing process with pid 530042 00:07:38.360 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 530042 00:07:38.360 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 530042 00:07:38.618 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:38.618 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:07:38.618 00:07:38.618 real 0m18.220s 00:07:38.618 user 0m24.172s 00:07:38.618 sys 0m5.623s 00:07:38.618 17:32:16 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.618 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:38.618 ************************************ 00:07:38.618 END TEST nvmf_queue_depth 00:07:38.618 ************************************ 00:07:38.618 17:32:16 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:07:38.618 17:32:16 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:38.618 17:32:16 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.618 17:32:16 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:38.618 ************************************ 00:07:38.618 START TEST nvmf_target_multipath 00:07:38.618 ************************************ 00:07:38.618 17:32:16 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:07:38.878 * Looking for test storage... 00:07:38.878 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:38.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.878 --rc genhtml_branch_coverage=1 00:07:38.878 --rc genhtml_function_coverage=1 00:07:38.878 --rc genhtml_legend=1 00:07:38.878 --rc geninfo_all_blocks=1 00:07:38.878 --rc geninfo_unexecuted_blocks=1 00:07:38.878 00:07:38.878 ' 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:38.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.878 --rc genhtml_branch_coverage=1 00:07:38.878 --rc genhtml_function_coverage=1 00:07:38.878 --rc genhtml_legend=1 00:07:38.878 --rc geninfo_all_blocks=1 00:07:38.878 --rc geninfo_unexecuted_blocks=1 00:07:38.878 00:07:38.878 ' 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:38.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.878 --rc genhtml_branch_coverage=1 00:07:38.878 --rc genhtml_function_coverage=1 00:07:38.878 --rc genhtml_legend=1 00:07:38.878 --rc geninfo_all_blocks=1 00:07:38.878 --rc geninfo_unexecuted_blocks=1 00:07:38.878 00:07:38.878 ' 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:38.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.878 --rc genhtml_branch_coverage=1 00:07:38.878 --rc genhtml_function_coverage=1 00:07:38.878 --rc genhtml_legend=1 00:07:38.878 --rc geninfo_all_blocks=1 00:07:38.878 --rc geninfo_unexecuted_blocks=1 00:07:38.878 00:07:38.878 ' 00:07:38.878 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:38.879 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:07:38.879 17:32:17 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@319 -- # net_devs=() 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:07:45.439 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:07:45.439 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:07:45.439 Found net devices under 0000:18:00.0: mlx_0_0 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:07:45.439 
17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:07:45.439 Found net devices under 0000:18:00.1: mlx_0_1 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # rdma_device_init 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # uname 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@528 -- # allocate_nic_ips 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:45.439 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 
00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:45.440 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:45.440 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:07:45.440 altname enp24s0f0np0 00:07:45.440 altname ens785f0np0 00:07:45.440 inet 192.168.100.8/24 scope global mlx_0_0 00:07:45.440 valid_lft forever preferred_lft forever 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:45.440 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:45.440 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:07:45.440 altname enp24s0f1np1 00:07:45.440 altname ens785f1np1 00:07:45.440 inet 192.168.100.9/24 scope global mlx_0_1 00:07:45.440 valid_lft forever preferred_lft forever 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@109 -- # continue 2 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:07:45.440 192.168.100.9' 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:07:45.440 192.168.100.9' 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # head -n 1 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:07:45.440 192.168.100.9' 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # tail -n +2 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # head -n 1 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:07:45.440 run this test only with TCP transport for now 00:07:45.440 17:32:23 
nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:45.440 rmmod nvme_rdma 00:07:45.440 rmmod nvme_fabrics 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:45.440 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:07:45.441 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:45.441 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:45.441 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:07:45.441 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:45.441 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:45.441 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:45.699 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:07:45.699 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:07:45.699 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:07:45.699 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:45.699 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:07:45.699 00:07:45.699 real 0m6.893s 00:07:45.699 user 0m2.039s 00:07:45.699 sys 0m5.068s 00:07:45.699 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 
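Everything the multipath test needed from the fabric was derived in the trace above: get_ip_address strips the CIDR suffix from ip -o -4 output, and the two-line RDMA_IP_LIST is split into first and second target addresses with head and tail. A minimal sketch of those helpers, reconstructed from the nvmf/common.sh@116-117 and @482-484 lines as traced (the function bodies are inferred from the individual commands the xtrace prints):

    # Reconstructed from the nvmf/common.sh@116-117 trace lines: print an
    # interface's first IPv4 address without its /prefix length.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    # @482-484: first line of the list becomes the first target IP, the
    # second line becomes the second (192.168.100.8 / .9 in this run).
    RDMA_IP_LIST=$(get_available_rdma_ips)
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

The test then bails at multipath.sh@52 because multipath is exercised only over TCP; the rdma run still pays the full discovery cost before exiting 0.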
00:07:45.699 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:07:45.699 ************************************ 00:07:45.699 END TEST nvmf_target_multipath 00:07:45.699 ************************************ 00:07:45.699 17:32:23 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:07:45.699 17:32:23 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:45.699 17:32:23 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.699 17:32:23 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:45.699 ************************************ 00:07:45.699 START TEST nvmf_zcopy 00:07:45.699 ************************************ 00:07:45.699 17:32:23 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:07:45.699 * Looking for test storage... 00:07:45.699 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:45.699 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:45.699 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:07:45.699 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:45.699 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:45.699 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.699 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.699 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.699 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.699 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.699 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.699 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.699 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.699 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.699 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.699 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.699 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:07:45.699 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:07:45.699 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.699 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:45.699 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:07:45.699 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:07:45.699 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.958 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:07:45.958 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.958 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:07:45.958 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:07:45.958 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:45.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.959 --rc genhtml_branch_coverage=1 00:07:45.959 --rc genhtml_function_coverage=1 00:07:45.959 --rc genhtml_legend=1 00:07:45.959 --rc geninfo_all_blocks=1 00:07:45.959 --rc geninfo_unexecuted_blocks=1 00:07:45.959 00:07:45.959 ' 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:45.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.959 --rc genhtml_branch_coverage=1 00:07:45.959 --rc genhtml_function_coverage=1 00:07:45.959 --rc genhtml_legend=1 00:07:45.959 --rc geninfo_all_blocks=1 00:07:45.959 --rc geninfo_unexecuted_blocks=1 00:07:45.959 00:07:45.959 ' 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:45.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.959 --rc genhtml_branch_coverage=1 00:07:45.959 --rc genhtml_function_coverage=1 00:07:45.959 --rc genhtml_legend=1 00:07:45.959 --rc geninfo_all_blocks=1 00:07:45.959 --rc geninfo_unexecuted_blocks=1 00:07:45.959 00:07:45.959 ' 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:45.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.959 --rc genhtml_branch_coverage=1 00:07:45.959 --rc genhtml_function_coverage=1 00:07:45.959 --rc genhtml_legend=1 00:07:45.959 --rc geninfo_all_blocks=1 00:07:45.959 --rc geninfo_unexecuted_blocks=1 00:07:45.959 00:07:45.959 ' 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:45.959 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:07:45.959 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:07:45.960 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:07:45.960 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:45.960 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:45.960 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:45.960 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.960 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:45.960 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.960 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:45.960 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:45.960 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:07:45.960 17:32:24 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:07:52.525 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:07:52.525 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
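Device classification in nvmftestinit is table-driven: common.sh@320-344 fills the e810/x722/mlx arrays from a PCI-ID cache keyed on vendor:device, @353-354 keeps only the Mellanox entries because SPDK_TEST_NVMF_NICS=mlx5, and @366-388 walks each function. A compressed sketch using only the ID this trace exercises; pci_bus_cache is assumed to have been populated earlier in common.sh, and both ports here report 0x1013 (a ConnectX-4 part):

    # Condensed from nvmf/common.sh@313-388 as traced.
    mellanox=0x15b3
    mlx=()
    mlx+=(${pci_bus_cache["$mellanox:0x1013"]})   # the only device ID on this host
    pci_devs=("${mlx[@]}")                        # mlx5 run: drop e810/x722 candidates
    for pci in "${pci_devs[@]}"; do
        echo "Found $pci ($mellanox - 0x1013)"    # matches the Found 0000:18:00.* lines
    done
    NVME_CONNECT='nvme connect -i 15'             # rdma transport swaps this form in at @388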
00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:07:52.525 Found net devices under 0000:18:00.0: mlx_0_0 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:07:52.525 Found net devices under 0000:18:00.1: mlx_0_1 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # rdma_device_init 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # uname 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe ib_cm 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe ib_core 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe ib_umad 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@70 -- # modprobe iw_cm 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # modprobe 
rdma_ucm 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@528 -- # allocate_nic_ips 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # get_rdma_if_list 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:52.525 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:07:52.526 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:52.526 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:07:52.526 altname enp24s0f0np0 00:07:52.526 altname ens785f0np0 00:07:52.526 inet 192.168.100.8/24 scope global mlx_0_0 
00:07:52.526 valid_lft forever preferred_lft forever 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:07:52.526 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:52.526 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:07:52.526 altname enp24s0f1np1 00:07:52.526 altname ens785f1np1 00:07:52.526 inet 192.168.100.9/24 scope global mlx_0_1 00:07:52.526 valid_lft forever preferred_lft forever 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # get_rdma_if_list 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_0 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:52.526 17:32:30 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@108 -- # echo mlx_0_1 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@109 -- # continue 2 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # cut -d/ -f1 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # awk '{print $4}' 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:07:52.526 192.168.100.9' 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # head -n 1 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:07:52.526 192.168.100.9' 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # head -n 1 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:07:52.526 192.168.100.9' 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # tail -n +2 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=537529 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 537529 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 537529 ']' 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:52.526 17:32:30 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:52.526 [2024-10-17 17:32:30.815407] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:07:52.526 [2024-10-17 17:32:30.815478] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.526 [2024-10-17 17:32:30.889398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.784 [2024-10-17 17:32:30.934047] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.784 [2024-10-17 17:32:30.934086] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.784 [2024-10-17 17:32:30.934096] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.784 [2024-10-17 17:32:30.934108] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.784 [2024-10-17 17:32:30.934115] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
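nvmfappstart, traced at common.sh@505-508, backgrounds the target binary and blocks in waitforlisten until the RPC socket answers; the EAL notices that follow are the target coming up single-core (-m 0x2 pins the one reactor to core 1, as the next line confirms). A condensed sketch of that start/wait pair: the start command, pid handling, and retry budget come straight from the trace, but the probe shown (rpc_get_methods over the UNIX socket) is an assumption about waitforlisten's internals, which the trace does not expand:

    # Start/wait sequence condensed from the trace above.
    "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!                                 # 537529 in this run
    rpc_addr=/var/tmp/spdk.sock
    for ((i = 100; i > 0; i--)); do            # local max_retries=100 per the trace
        kill -0 "$nvmfpid" 2> /dev/null || break    # target died: stop waiting
        # assumed probe: any successful RPC means the socket is listening
        "$rootdir/scripts/rpc.py" -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && break
        sleep 0.5
    done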
00:07:52.784 [2024-10-17 17:32:30.934594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.784 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:52.784 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:07:52.784 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:07:52.784 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:52.784 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:52.784 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.784 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:07:52.784 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:07:52.784 Unsupported transport: rdma 00:07:52.784 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:07:52.784 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:07:52.784 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@808 -- # type=--id 00:07:52.784 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@809 -- # id=0 00:07:52.784 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:07:52.784 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:52.784 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:07:52.784 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:07:52.784 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@820 -- # for n in $shm_files 00:07:52.784 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:52.784 nvmf_trace.0 00:07:52.784 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@823 -- # return 0 00:07:52.784 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:07:52.784 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:07:52.784 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:07:52.784 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:07:52.784 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:07:52.784 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:07:52.784 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:52.784 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:07:52.784 rmmod nvme_rdma 00:07:52.784 rmmod nvme_fabrics 00:07:53.041 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:53.041 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 
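Although zcopy exits at zcopy.sh@17 with 'Unsupported transport: rdma', the EXIT trap still does real work: process_shm snapshots the target's shared-memory trace segment into the output directory, and nvmftestfini then unloads the fabric modules (the rmmod nvme_rdma / nvme_fabrics lines above). The archiving step, reconstructed from the autotest_common.sh@808-823 trace lines; output_dir stands in for the spdk/../output path shown there, and the helper's --pid branch is elided:

    # Reconstructed from common/autotest_common.sh@808-823 as traced.
    process_shm() {
        local type=$1 id=$2                    # invoked above as: process_shm --id 0
        local shm_files n
        shm_files=$(find /dev/shm -name "*.$id" -printf '%f\n')
        [[ -z $shm_files ]] && return 1
        for n in $shm_files; do                # nvmf_trace.0 in this run
            tar -C /dev/shm/ -cvzf "$output_dir/${n}_shm.tar.gz" "$n"
        done
    }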
00:07:53.041 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:07:53.041 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 537529 ']' 00:07:53.041 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 537529 00:07:53.041 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 537529 ']' 00:07:53.041 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 537529 00:07:53.041 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:07:53.041 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:53.041 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 537529 00:07:53.041 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:53.041 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:53.041 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 537529' 00:07:53.041 killing process with pid 537529 00:07:53.041 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 537529 00:07:53.041 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 537529 00:07:53.041 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:07:53.041 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:07:53.041 00:07:53.041 real 0m7.509s 00:07:53.041 user 0m2.712s 00:07:53.041 sys 0m5.402s 00:07:53.041 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.041 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:07:53.041 ************************************ 00:07:53.041 END TEST nvmf_zcopy 00:07:53.041 ************************************ 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:53.298 ************************************ 00:07:53.298 START TEST nvmf_nmic 00:07:53.298 ************************************ 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:07:53.298 * Looking for test storage... 
00:07:53.298 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:07:53.298 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:53.299 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:53.299 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:07:53.299 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.299 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:53.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.299 --rc genhtml_branch_coverage=1 00:07:53.299 --rc genhtml_function_coverage=1 00:07:53.299 --rc genhtml_legend=1 00:07:53.299 --rc geninfo_all_blocks=1 00:07:53.299 --rc geninfo_unexecuted_blocks=1 00:07:53.299 00:07:53.299 ' 00:07:53.299 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:53.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.299 --rc genhtml_branch_coverage=1 00:07:53.299 --rc genhtml_function_coverage=1 00:07:53.299 --rc genhtml_legend=1 00:07:53.299 --rc geninfo_all_blocks=1 00:07:53.299 --rc geninfo_unexecuted_blocks=1 00:07:53.299 00:07:53.299 ' 00:07:53.299 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:53.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.299 --rc genhtml_branch_coverage=1 00:07:53.299 --rc genhtml_function_coverage=1 00:07:53.299 --rc genhtml_legend=1 00:07:53.299 --rc geninfo_all_blocks=1 00:07:53.299 --rc geninfo_unexecuted_blocks=1 00:07:53.299 00:07:53.299 ' 00:07:53.299 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:53.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.299 --rc genhtml_branch_coverage=1 00:07:53.299 --rc genhtml_function_coverage=1 00:07:53.299 --rc genhtml_legend=1 00:07:53.299 --rc geninfo_all_blocks=1 00:07:53.299 --rc geninfo_unexecuted_blocks=1 00:07:53.299 00:07:53.299 ' 00:07:53.299 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:53.557 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 
00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:07:53.557 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.558 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:07:53.558 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:07:53.558 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:07:53.558 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.558 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:53.558 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.558 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:07:53.558 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:07:53.558 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:07:53.558 17:32:31 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:00.118 17:32:37 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:08:00.118 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:08:00.118 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:00.118 Found net devices under 0000:18:00.0: mlx_0_0 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:00.118 Found net devices under 0000:18:00.1: mlx_0_1 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # rdma_device_init 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # uname 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@71 -- # modprobe rdma_cm 
00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@528 -- # allocate_nic_ips 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:00.118 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:00.119 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:00.119 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:08:00.119 altname enp24s0f0np0 00:08:00.119 altname ens785f0np0 
00:08:00.119 inet 192.168.100.8/24 scope global mlx_0_0 00:08:00.119 valid_lft forever preferred_lft forever 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:00.119 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:00.119 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:08:00.119 altname enp24s0f1np1 00:08:00.119 altname ens785f1np1 00:08:00.119 inet 192.168.100.9/24 scope global mlx_0_1 00:08:00.119 valid_lft forever preferred_lft forever 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:00.119 
17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@109 -- # continue 2 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:08:00.119 192.168.100.9' 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:08:00.119 192.168.100.9' 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # head -n 1 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:08:00.119 192.168.100.9' 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # tail -n +2 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # head -n 1 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # 
xtrace_disable 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=540533 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 540533 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 540533 ']' 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.119 17:32:37 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:00.119 [2024-10-17 17:32:37.978584] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:08:00.119 [2024-10-17 17:32:37.978643] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.119 [2024-10-17 17:32:38.050502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:00.119 [2024-10-17 17:32:38.097142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:00.119 [2024-10-17 17:32:38.097184] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:00.119 [2024-10-17 17:32:38.097194] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:00.119 [2024-10-17 17:32:38.097203] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:00.119 [2024-10-17 17:32:38.097210] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:00.119 [2024-10-17 17:32:38.098564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.119 [2024-10-17 17:32:38.098652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.119 [2024-10-17 17:32:38.098729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:00.119 [2024-10-17 17:32:38.098730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.119 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:00.119 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:08:00.119 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:00.119 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:00.119 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:00.119 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.119 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:00.119 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.119 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:00.119 [2024-10-17 17:32:38.274604] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6fe2c0/0x7027b0) succeed. 00:08:00.120 [2024-10-17 17:32:38.285157] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6ff950/0x743e50) succeed. 00:08:00.120 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.120 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:00.120 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.120 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:00.120 Malloc0 00:08:00.120 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.120 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:00.120 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.120 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:00.120 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.120 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:00.120 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.120 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:00.120 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.120 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:00.120 17:32:38 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.120 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:00.120 [2024-10-17 17:32:38.475268] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:00.120 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.120 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:00.120 test case1: single bdev can't be used in multiple subsystems 00:08:00.120 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:00.120 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.120 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:00.120 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.120 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:08:00.120 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.120 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:00.120 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.120 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:00.120 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:00.120 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.120 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:00.120 [2024-10-17 17:32:38.503165] bdev.c:8192:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:00.120 [2024-10-17 17:32:38.503185] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:00.120 [2024-10-17 17:32:38.503195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:00.378 request: 00:08:00.378 { 00:08:00.378 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:00.378 "namespace": { 00:08:00.378 "bdev_name": "Malloc0", 00:08:00.378 "no_auto_visible": false 00:08:00.378 }, 00:08:00.378 "method": "nvmf_subsystem_add_ns", 00:08:00.378 "req_id": 1 00:08:00.378 } 00:08:00.378 Got JSON-RPC error response 00:08:00.378 response: 00:08:00.378 { 00:08:00.378 "code": -32602, 00:08:00.378 "message": "Invalid parameters" 00:08:00.378 } 00:08:00.378 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:00.378 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:00.378 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:00.378 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:00.378 Adding namespace failed - expected result. 
00:08:00.378 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:00.378 test case2: host connect to nvmf target in multiple paths 00:08:00.378 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:08:00.378 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:00.378 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:00.378 [2024-10-17 17:32:38.519240] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:08:00.378 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:00.378 17:32:38 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:01.752 17:32:40 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:08:03.813 17:32:41 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:03.813 17:32:41 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:08:03.813 17:32:41 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:03.813 17:32:41 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:03.813 17:32:41 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:08:05.713 17:32:43 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:05.713 17:32:43 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:05.713 17:32:43 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:05.713 17:32:43 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:05.713 17:32:43 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:05.713 17:32:43 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:08:05.713 17:32:43 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:05.713 [global] 00:08:05.713 thread=1 00:08:05.713 invalidate=1 00:08:05.713 rw=write 00:08:05.713 time_based=1 00:08:05.713 runtime=1 00:08:05.713 ioengine=libaio 00:08:05.713 direct=1 00:08:05.713 bs=4096 00:08:05.713 iodepth=1 00:08:05.713 norandommap=0 00:08:05.713 numjobs=1 00:08:05.713 00:08:05.713 verify_dump=1 00:08:05.713 verify_backlog=512 00:08:05.713 verify_state_save=0 00:08:05.713 do_verify=1 00:08:05.713 verify=crc32c-intel 00:08:05.713 [job0] 00:08:05.713 filename=/dev/nvme0n1 00:08:05.713 Could not set queue depth (nvme0n1) 00:08:05.713 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:05.713 fio-3.35 00:08:05.713 Starting 1 thread 00:08:07.087 00:08:07.087 job0: (groupid=0, jobs=1): err= 0: pid=541418: Thu Oct 17 17:32:45 2024 00:08:07.088 read: IOPS=6780, BW=26.5MiB/s (27.8MB/s)(26.5MiB/1001msec) 00:08:07.088 slat (nsec): min=8632, max=28226, avg=9233.19, stdev=981.92 00:08:07.088 clat (nsec): min=44708, max=83349, avg=59387.93, stdev=3781.26 00:08:07.088 lat (usec): min=59, max=102, avg=68.62, stdev= 3.91 00:08:07.088 clat percentiles (nsec): 00:08:07.088 | 1.00th=[52480], 5.00th=[54016], 10.00th=[55040], 20.00th=[56064], 00:08:07.088 | 30.00th=[57088], 40.00th=[58112], 50.00th=[59136], 60.00th=[60160], 00:08:07.088 | 70.00th=[61184], 80.00th=[62208], 90.00th=[64256], 95.00th=[66048], 00:08:07.088 | 99.00th=[71168], 99.50th=[73216], 99.90th=[77312], 99.95th=[77312], 00:08:07.088 | 99.99th=[83456] 00:08:07.088 write: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec); 0 zone resets 00:08:07.088 slat (nsec): min=11153, max=41768, avg=12212.86, stdev=1276.97 00:08:07.088 clat (usec): min=42, max=107, avg=56.89, stdev= 3.83 00:08:07.088 lat (usec): min=59, max=119, avg=69.10, stdev= 4.06 00:08:07.088 clat percentiles (usec): 00:08:07.088 | 1.00th=[ 50], 5.00th=[ 52], 10.00th=[ 53], 20.00th=[ 54], 00:08:07.088 | 30.00th=[ 55], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 58], 00:08:07.088 | 70.00th=[ 59], 80.00th=[ 61], 90.00th=[ 62], 95.00th=[ 64], 00:08:07.088 | 99.00th=[ 68], 99.50th=[ 71], 99.90th=[ 77], 99.95th=[ 78], 00:08:07.088 | 99.99th=[ 109] 00:08:07.088 bw ( KiB/s): min=28672, max=28672, per=100.00%, avg=28672.00, stdev= 0.00, samples=1 00:08:07.088 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=1 00:08:07.088 lat (usec) : 50=0.52%, 100=99.48%, 250=0.01% 00:08:07.088 cpu : usr=11.00%, sys=13.40%, ctx=13955, majf=0, minf=1 00:08:07.088 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:07.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:07.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:07.088 issued rwts: total=6787,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:07.088 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:07.088 00:08:07.088 Run status group 0 (all jobs): 00:08:07.088 READ: bw=26.5MiB/s (27.8MB/s), 26.5MiB/s-26.5MiB/s (27.8MB/s-27.8MB/s), io=26.5MiB (27.8MB), run=1001-1001msec 00:08:07.088 WRITE: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:08:07.088 00:08:07.088 Disk stats (read/write): 00:08:07.088 nvme0n1: ios=6194/6402, merge=0/0, ticks=326/300, in_queue=626, util=90.78% 00:08:07.088 17:32:45 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:13.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:13.648 
17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:13.648 rmmod nvme_rdma 00:08:13.648 rmmod nvme_fabrics 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 540533 ']' 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 540533 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 540533 ']' 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 540533 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 540533 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 540533' 00:08:13.648 killing process with pid 540533 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 540533 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 540533 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:08:13.648 00:08:13.648 real 0m20.488s 00:08:13.648 user 0m59.576s 00:08:13.648 sys 0m5.904s 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.648 17:32:51 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:13.648 ************************************ 00:08:13.648 END TEST nvmf_nmic 00:08:13.648 
************************************ 00:08:13.648 17:32:52 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:08:13.648 17:32:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:13.648 17:32:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.648 17:32:52 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:13.908 ************************************ 00:08:13.908 START TEST nvmf_fio_target 00:08:13.908 ************************************ 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:08:13.908 * Looking for test storage... 00:08:13.908 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:13.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.908 --rc genhtml_branch_coverage=1 00:08:13.908 --rc genhtml_function_coverage=1 00:08:13.908 --rc genhtml_legend=1 00:08:13.908 --rc geninfo_all_blocks=1 00:08:13.908 --rc geninfo_unexecuted_blocks=1 00:08:13.908 00:08:13.908 ' 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:13.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.908 --rc genhtml_branch_coverage=1 00:08:13.908 --rc genhtml_function_coverage=1 00:08:13.908 --rc genhtml_legend=1 00:08:13.908 --rc geninfo_all_blocks=1 00:08:13.908 --rc geninfo_unexecuted_blocks=1 00:08:13.908 00:08:13.908 ' 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:13.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.908 --rc genhtml_branch_coverage=1 00:08:13.908 --rc genhtml_function_coverage=1 00:08:13.908 --rc genhtml_legend=1 00:08:13.908 --rc geninfo_all_blocks=1 00:08:13.908 --rc geninfo_unexecuted_blocks=1 00:08:13.908 00:08:13.908 ' 00:08:13.908 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:13.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.908 --rc genhtml_branch_coverage=1 00:08:13.908 --rc genhtml_function_coverage=1 00:08:13.908 --rc genhtml_legend=1 00:08:13.908 --rc geninfo_all_blocks=1 00:08:13.909 --rc geninfo_unexecuted_blocks=1 00:08:13.909 00:08:13.909 ' 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@7 -- # uname -s 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:13.909 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:13.909 
17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:13.909 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.244 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:14.244 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:14.244 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:14.244 17:32:52 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:08:20.811 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:08:20.811 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:20.811 Found net devices under 0000:18:00.0: mlx_0_0 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:20.811 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:20.812 Found net devices under 0000:18:00.1: mlx_0_1 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # rdma_device_init 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:08:20.812 17:32:58 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # uname 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@528 -- # allocate_nic_ips 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:20.812 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:20.812 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:08:20.812 altname enp24s0f0np0 00:08:20.812 altname ens785f0np0 00:08:20.812 inet 192.168.100.8/24 scope global mlx_0_0 00:08:20.812 valid_lft forever preferred_lft forever 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:20.812 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:20.812 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:08:20.812 altname enp24s0f1np1 00:08:20.812 altname ens785f1np1 00:08:20.812 inet 192.168.100.9/24 scope global mlx_0_1 00:08:20.812 valid_lft forever preferred_lft forever 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:20.812 17:32:58 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@109 -- # continue 2 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:20.812 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:08:20.813 192.168.100.9' 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:08:20.813 192.168.100.9' 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # head -n 1 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # tail -n +2 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # head -n 1 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:08:20.813 192.168.100.9' 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=545429 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 545429 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 545429 ']' 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:20.813 [2024-10-17 17:32:58.723238] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:08:20.813 [2024-10-17 17:32:58.723306] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:20.813 [2024-10-17 17:32:58.798707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:20.813 [2024-10-17 17:32:58.843970] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:08:20.813 [2024-10-17 17:32:58.844012] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:20.813 [2024-10-17 17:32:58.844022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:20.813 [2024-10-17 17:32:58.844031] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:20.813 [2024-10-17 17:32:58.844038] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:20.813 [2024-10-17 17:32:58.845334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.813 [2024-10-17 17:32:58.845435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.813 [2024-10-17 17:32:58.845478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:20.813 [2024-10-17 17:32:58.845480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.813 17:32:58 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:20.813 [2024-10-17 17:32:59.183077] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d922c0/0x1d967b0) succeed. 00:08:20.813 [2024-10-17 17:32:59.193660] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d93950/0x1dd7e50) succeed. 
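(For reference: the target-side setup that target/fio.sh drives through rpc.py below condenses to the following sketch. This is a minimal reconstruction from the RPC calls visible in this trace, not the script itself; the rpc.py path, NQN, serial, listen address and host identity are the values from this run, and the literal bdev names Malloc0..Malloc6 assume a freshly started nvmf_tgt with default malloc bdev naming.)

#!/usr/bin/env bash
set -e
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# RDMA transport with 1024 shared buffers and 8192 B in-capsule data
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

# Seven 64 MiB malloc bdevs with a 512 B block size (named Malloc0..Malloc6)
for i in 0 1 2 3 4 5 6; do
    $rpc bdev_malloc_create 64 512
done

# Malloc2/Malloc3 become a raid0 bdev, Malloc4..Malloc6 a concat bdev
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

# One subsystem exposing all four backing devices over RDMA port 4420
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

# Initiator side (-i 15 is what nvmf/common.sh adds for these mlx5 NICs)
nvme connect -i 15 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c \
    --hostid=800e967b-538f-e911-906e-001635649f5c \
    -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420

The resulting /dev/nvme0n1..n4 are four differently backed namespaces (two plain malloc, one raid0, one concat), which is what the fio write/randwrite verify jobs further down exercise.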
00:08:21.073 17:32:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:21.332 17:32:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:21.332 17:32:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:21.591 17:32:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:21.591 17:32:59 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:21.850 17:33:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:21.850 17:33:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:22.109 17:33:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:22.109 17:33:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:22.109 17:33:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:22.368 17:33:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:22.368 17:33:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:22.627 17:33:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:22.627 17:33:00 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:22.887 17:33:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:22.887 17:33:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:08:23.145 17:33:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:23.404 17:33:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:23.404 17:33:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:23.663 17:33:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:23.663 17:33:01 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:23.663 17:33:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:23.922 [2024-10-17 17:33:02.186565] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:23.922 17:33:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:24.181 17:33:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:24.440 17:33:02 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:25.818 17:33:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:25.818 17:33:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:08:25.818 17:33:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:25.818 17:33:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:08:25.818 17:33:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:08:25.818 17:33:04 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:08:28.355 17:33:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:28.355 17:33:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:28.355 17:33:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:28.355 17:33:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:08:28.355 17:33:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:28.355 17:33:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:08:28.355 17:33:06 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:28.355 [global] 00:08:28.355 thread=1 00:08:28.355 invalidate=1 00:08:28.355 rw=write 00:08:28.355 time_based=1 00:08:28.355 runtime=1 00:08:28.355 ioengine=libaio 00:08:28.355 direct=1 00:08:28.355 bs=4096 00:08:28.355 iodepth=1 00:08:28.355 norandommap=0 00:08:28.355 numjobs=1 00:08:28.355 00:08:28.355 verify_dump=1 00:08:28.355 verify_backlog=512 00:08:28.355 verify_state_save=0 00:08:28.355 do_verify=1 00:08:28.355 verify=crc32c-intel 00:08:28.355 [job0] 00:08:28.355 filename=/dev/nvme0n1 00:08:28.355 [job1] 00:08:28.355 filename=/dev/nvme0n2 00:08:28.355 [job2] 00:08:28.355 filename=/dev/nvme0n3 00:08:28.355 [job3] 00:08:28.355 filename=/dev/nvme0n4 00:08:28.355 Could not set queue depth (nvme0n1) 00:08:28.355 Could not set queue depth (nvme0n2) 00:08:28.355 Could not set queue depth (nvme0n3) 00:08:28.355 Could not set queue depth (nvme0n4) 00:08:28.355 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:28.355 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:28.355 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:28.355 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:28.355 fio-3.35 00:08:28.355 Starting 4 threads 00:08:29.735 00:08:29.735 job0: (groupid=0, jobs=1): err= 0: pid=547101: Thu Oct 17 17:33:07 2024 00:08:29.735 read: IOPS=4482, BW=17.5MiB/s (18.4MB/s)(17.5MiB/1001msec) 00:08:29.735 slat (nsec): min=8601, max=44676, avg=9180.91, stdev=1239.66 00:08:29.735 clat (usec): min=66, max=289, avg=99.72, stdev=16.46 00:08:29.735 lat (usec): min=75, max=298, avg=108.90, stdev=16.49 00:08:29.735 clat percentiles (usec): 00:08:29.735 | 1.00th=[ 73], 5.00th=[ 75], 10.00th=[ 78], 20.00th=[ 81], 00:08:29.735 | 30.00th=[ 87], 40.00th=[ 98], 50.00th=[ 103], 60.00th=[ 108], 00:08:29.735 | 70.00th=[ 112], 80.00th=[ 115], 90.00th=[ 119], 95.00th=[ 123], 00:08:29.735 | 99.00th=[ 131], 99.50th=[ 135], 99.90th=[ 153], 99.95th=[ 157], 00:08:29.735 | 99.99th=[ 289] 00:08:29.735 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:08:29.735 slat (nsec): min=11017, max=46832, avg=12184.39, stdev=1320.81 00:08:29.735 clat (usec): min=61, max=157, avg=93.20, stdev=15.39 00:08:29.735 lat (usec): min=73, max=182, avg=105.39, stdev=15.50 00:08:29.736 clat percentiles (usec): 00:08:29.736 | 1.00th=[ 69], 5.00th=[ 71], 10.00th=[ 73], 20.00th=[ 77], 00:08:29.736 | 30.00th=[ 81], 40.00th=[ 91], 50.00th=[ 96], 60.00th=[ 100], 00:08:29.736 | 70.00th=[ 103], 80.00th=[ 108], 90.00th=[ 113], 95.00th=[ 117], 00:08:29.736 | 99.00th=[ 124], 99.50th=[ 129], 99.90th=[ 149], 99.95th=[ 151], 00:08:29.736 | 99.99th=[ 157] 00:08:29.736 bw ( KiB/s): min=20480, max=20480, per=28.86%, avg=20480.00, stdev= 0.00, samples=1 00:08:29.736 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:08:29.736 lat (usec) : 100=52.64%, 250=47.34%, 500=0.01% 00:08:29.736 cpu : usr=5.00%, sys=11.20%, ctx=9096, majf=0, minf=1 00:08:29.736 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:29.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:29.736 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:29.736 issued rwts: total=4487,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:29.736 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:29.736 job1: (groupid=0, jobs=1): err= 0: pid=547106: Thu Oct 17 17:33:07 2024 00:08:29.736 read: IOPS=3757, BW=14.7MiB/s (15.4MB/s)(14.7MiB/1001msec) 00:08:29.736 slat (nsec): min=8585, max=26643, avg=9462.76, stdev=1098.48 00:08:29.736 clat (usec): min=71, max=210, avg=117.68, stdev=17.31 00:08:29.736 lat (usec): min=80, max=219, avg=127.15, stdev=17.36 00:08:29.736 clat percentiles (usec): 00:08:29.736 | 1.00th=[ 91], 5.00th=[ 96], 10.00th=[ 99], 20.00th=[ 104], 00:08:29.736 | 30.00th=[ 108], 40.00th=[ 112], 50.00th=[ 115], 60.00th=[ 118], 00:08:29.736 | 70.00th=[ 124], 80.00th=[ 135], 90.00th=[ 143], 95.00th=[ 149], 00:08:29.736 | 99.00th=[ 172], 99.50th=[ 188], 99.90th=[ 204], 99.95th=[ 208], 00:08:29.736 | 99.99th=[ 210] 00:08:29.736 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:08:29.736 slat (nsec): min=10776, max=49753, avg=12099.13, stdev=1752.24 00:08:29.736 clat (usec): min=67, 
max=200, avg=110.15, stdev=15.59 00:08:29.736 lat (usec): min=78, max=212, avg=122.25, stdev=15.74 00:08:29.736 clat percentiles (usec): 00:08:29.736 | 1.00th=[ 86], 5.00th=[ 91], 10.00th=[ 94], 20.00th=[ 98], 00:08:29.736 | 30.00th=[ 101], 40.00th=[ 104], 50.00th=[ 108], 60.00th=[ 112], 00:08:29.736 | 70.00th=[ 116], 80.00th=[ 123], 90.00th=[ 133], 95.00th=[ 139], 00:08:29.736 | 99.00th=[ 159], 99.50th=[ 174], 99.90th=[ 190], 99.95th=[ 190], 00:08:29.736 | 99.99th=[ 200] 00:08:29.736 bw ( KiB/s): min=16384, max=16384, per=23.09%, avg=16384.00, stdev= 0.00, samples=1 00:08:29.736 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:08:29.736 lat (usec) : 100=20.08%, 250=79.92% 00:08:29.736 cpu : usr=6.40%, sys=10.80%, ctx=7857, majf=0, minf=1 00:08:29.736 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:29.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:29.736 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:29.736 issued rwts: total=3761,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:29.736 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:29.736 job2: (groupid=0, jobs=1): err= 0: pid=547108: Thu Oct 17 17:33:07 2024 00:08:29.736 read: IOPS=4209, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1002msec) 00:08:29.736 slat (nsec): min=8895, max=29127, avg=9612.93, stdev=917.56 00:08:29.736 clat (usec): min=78, max=278, avg=101.47, stdev=14.35 00:08:29.736 lat (usec): min=88, max=287, avg=111.09, stdev=14.41 00:08:29.736 clat percentiles (usec): 00:08:29.736 | 1.00th=[ 86], 5.00th=[ 90], 10.00th=[ 92], 20.00th=[ 95], 00:08:29.736 | 30.00th=[ 96], 40.00th=[ 98], 50.00th=[ 100], 60.00th=[ 101], 00:08:29.736 | 70.00th=[ 103], 80.00th=[ 105], 90.00th=[ 110], 95.00th=[ 115], 00:08:29.736 | 99.00th=[ 180], 99.50th=[ 200], 99.90th=[ 241], 99.95th=[ 260], 00:08:29.736 | 99.99th=[ 277] 00:08:29.736 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:08:29.736 slat (nsec): min=7575, max=48316, avg=12672.25, stdev=1552.16 00:08:29.736 clat (usec): min=74, max=289, avg=97.32, stdev=16.22 00:08:29.736 lat (usec): min=87, max=302, avg=109.99, stdev=16.25 00:08:29.736 clat percentiles (usec): 00:08:29.736 | 1.00th=[ 82], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 89], 00:08:29.736 | 30.00th=[ 91], 40.00th=[ 93], 50.00th=[ 94], 60.00th=[ 96], 00:08:29.736 | 70.00th=[ 98], 80.00th=[ 100], 90.00th=[ 106], 95.00th=[ 126], 00:08:29.736 | 99.00th=[ 180], 99.50th=[ 200], 99.90th=[ 231], 99.95th=[ 253], 00:08:29.736 | 99.99th=[ 289] 00:08:29.736 bw ( KiB/s): min=19104, max=19104, per=26.92%, avg=19104.00, stdev= 0.00, samples=1 00:08:29.736 iops : min= 4776, max= 4776, avg=4776.00, stdev= 0.00, samples=1 00:08:29.736 lat (usec) : 100=67.37%, 250=32.56%, 500=0.07% 00:08:29.736 cpu : usr=6.89%, sys=9.19%, ctx=8827, majf=0, minf=1 00:08:29.736 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:29.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:29.736 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:29.736 issued rwts: total=4218,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:29.736 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:29.736 job3: (groupid=0, jobs=1): err= 0: pid=547111: Thu Oct 17 17:33:07 2024 00:08:29.736 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:08:29.736 slat (nsec): min=8912, max=29283, avg=9546.35, stdev=974.62 00:08:29.736 clat (usec): min=78, 
max=197, avg=107.40, stdev=20.31 00:08:29.736 lat (usec): min=87, max=207, avg=116.95, stdev=20.27 00:08:29.736 clat percentiles (usec): 00:08:29.736 | 1.00th=[ 85], 5.00th=[ 89], 10.00th=[ 90], 20.00th=[ 93], 00:08:29.736 | 30.00th=[ 95], 40.00th=[ 97], 50.00th=[ 99], 60.00th=[ 101], 00:08:29.736 | 70.00th=[ 106], 80.00th=[ 133], 90.00th=[ 143], 95.00th=[ 149], 00:08:29.736 | 99.00th=[ 159], 99.50th=[ 176], 99.90th=[ 192], 99.95th=[ 194], 00:08:29.736 | 99.99th=[ 198] 00:08:29.736 write: IOPS=4460, BW=17.4MiB/s (18.3MB/s)(17.4MiB/1001msec); 0 zone resets 00:08:29.736 slat (nsec): min=11100, max=42868, avg=12523.74, stdev=1281.54 00:08:29.736 clat (usec): min=73, max=187, avg=98.66, stdev=16.56 00:08:29.736 lat (usec): min=85, max=199, avg=111.19, stdev=16.51 00:08:29.736 clat percentiles (usec): 00:08:29.736 | 1.00th=[ 79], 5.00th=[ 83], 10.00th=[ 85], 20.00th=[ 88], 00:08:29.736 | 30.00th=[ 90], 40.00th=[ 92], 50.00th=[ 94], 60.00th=[ 96], 00:08:29.736 | 70.00th=[ 99], 80.00th=[ 106], 90.00th=[ 129], 95.00th=[ 135], 00:08:29.736 | 99.00th=[ 147], 99.50th=[ 159], 99.90th=[ 178], 99.95th=[ 182], 00:08:29.736 | 99.99th=[ 188] 00:08:29.736 bw ( KiB/s): min=16384, max=16384, per=23.09%, avg=16384.00, stdev= 0.00, samples=1 00:08:29.736 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:08:29.736 lat (usec) : 100=64.40%, 250=35.60% 00:08:29.736 cpu : usr=7.10%, sys=8.50%, ctx=8562, majf=0, minf=1 00:08:29.736 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:29.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:29.736 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:29.736 issued rwts: total=4096,4465,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:29.736 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:29.736 00:08:29.736 Run status group 0 (all jobs): 00:08:29.736 READ: bw=64.6MiB/s (67.7MB/s), 14.7MiB/s-17.5MiB/s (15.4MB/s-18.4MB/s), io=64.7MiB (67.8MB), run=1001-1002msec 00:08:29.736 WRITE: bw=69.3MiB/s (72.7MB/s), 16.0MiB/s-18.0MiB/s (16.8MB/s-18.9MB/s), io=69.4MiB (72.8MB), run=1001-1002msec 00:08:29.736 00:08:29.736 Disk stats (read/write): 00:08:29.736 nvme0n1: ios=3789/4096, merge=0/0, ticks=341/345, in_queue=686, util=85.97% 00:08:29.736 nvme0n2: ios=3072/3526, merge=0/0, ticks=341/353, in_queue=694, util=86.57% 00:08:29.736 nvme0n3: ios=3584/3920, merge=0/0, ticks=331/348, in_queue=679, util=88.92% 00:08:29.736 nvme0n4: ios=3583/3584, merge=0/0, ticks=369/325, in_queue=694, util=89.67% 00:08:29.736 17:33:07 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:08:29.736 [global] 00:08:29.736 thread=1 00:08:29.736 invalidate=1 00:08:29.736 rw=randwrite 00:08:29.736 time_based=1 00:08:29.736 runtime=1 00:08:29.736 ioengine=libaio 00:08:29.736 direct=1 00:08:29.736 bs=4096 00:08:29.736 iodepth=1 00:08:29.736 norandommap=0 00:08:29.736 numjobs=1 00:08:29.736 00:08:29.736 verify_dump=1 00:08:29.736 verify_backlog=512 00:08:29.736 verify_state_save=0 00:08:29.736 do_verify=1 00:08:29.736 verify=crc32c-intel 00:08:29.736 [job0] 00:08:29.736 filename=/dev/nvme0n1 00:08:29.736 [job1] 00:08:29.736 filename=/dev/nvme0n2 00:08:29.736 [job2] 00:08:29.736 filename=/dev/nvme0n3 00:08:29.736 [job3] 00:08:29.736 filename=/dev/nvme0n4 00:08:29.736 Could not set queue depth (nvme0n1) 00:08:29.736 Could not set queue depth (nvme0n2) 00:08:29.736 Could not set queue 
depth (nvme0n3) 00:08:29.736 Could not set queue depth (nvme0n4) 00:08:29.993 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:29.993 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:29.993 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:29.993 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:29.993 fio-3.35 00:08:29.993 Starting 4 threads 00:08:31.366 00:08:31.366 job0: (groupid=0, jobs=1): err= 0: pid=547475: Thu Oct 17 17:33:09 2024 00:08:31.366 read: IOPS=3362, BW=13.1MiB/s (13.8MB/s)(13.1MiB/1001msec) 00:08:31.366 slat (nsec): min=8772, max=21384, avg=9294.03, stdev=812.36 00:08:31.366 clat (usec): min=80, max=316, avg=134.38, stdev=19.31 00:08:31.366 lat (usec): min=89, max=325, avg=143.67, stdev=19.30 00:08:31.366 clat percentiles (usec): 00:08:31.366 | 1.00th=[ 97], 5.00th=[ 118], 10.00th=[ 121], 20.00th=[ 124], 00:08:31.366 | 30.00th=[ 127], 40.00th=[ 129], 50.00th=[ 131], 60.00th=[ 133], 00:08:31.366 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 151], 95.00th=[ 163], 00:08:31.366 | 99.00th=[ 227], 99.50th=[ 243], 99.90th=[ 277], 99.95th=[ 302], 00:08:31.366 | 99.99th=[ 318] 00:08:31.366 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:08:31.366 slat (nsec): min=9717, max=60827, avg=11848.83, stdev=1589.31 00:08:31.366 clat (usec): min=76, max=331, avg=127.17, stdev=20.79 00:08:31.366 lat (usec): min=88, max=343, avg=139.02, stdev=20.78 00:08:31.366 clat percentiles (usec): 00:08:31.366 | 1.00th=[ 88], 5.00th=[ 112], 10.00th=[ 114], 20.00th=[ 117], 00:08:31.366 | 30.00th=[ 119], 40.00th=[ 121], 50.00th=[ 123], 60.00th=[ 125], 00:08:31.366 | 70.00th=[ 128], 80.00th=[ 135], 90.00th=[ 147], 95.00th=[ 161], 00:08:31.366 | 99.00th=[ 219], 99.50th=[ 243], 99.90th=[ 289], 99.95th=[ 302], 00:08:31.366 | 99.99th=[ 330] 00:08:31.366 bw ( KiB/s): min=16064, max=16064, per=26.07%, avg=16064.00, stdev= 0.00, samples=1 00:08:31.366 iops : min= 4016, max= 4016, avg=4016.00, stdev= 0.00, samples=1 00:08:31.366 lat (usec) : 100=1.42%, 250=98.19%, 500=0.39% 00:08:31.366 cpu : usr=5.20%, sys=6.90%, ctx=6950, majf=0, minf=1 00:08:31.366 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:31.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:31.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:31.366 issued rwts: total=3366,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:31.366 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:31.366 job1: (groupid=0, jobs=1): err= 0: pid=547489: Thu Oct 17 17:33:09 2024 00:08:31.366 read: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec) 00:08:31.366 slat (nsec): min=4346, max=33721, avg=9302.66, stdev=1277.28 00:08:31.366 clat (usec): min=66, max=162, avg=111.13, stdev=16.27 00:08:31.366 lat (usec): min=71, max=171, avg=120.44, stdev=16.56 00:08:31.366 clat percentiles (usec): 00:08:31.366 | 1.00th=[ 73], 5.00th=[ 78], 10.00th=[ 81], 20.00th=[ 104], 00:08:31.366 | 30.00th=[ 111], 40.00th=[ 115], 50.00th=[ 117], 60.00th=[ 119], 00:08:31.366 | 70.00th=[ 121], 80.00th=[ 123], 90.00th=[ 127], 95.00th=[ 130], 00:08:31.366 | 99.00th=[ 137], 99.50th=[ 139], 99.90th=[ 155], 99.95th=[ 161], 00:08:31.366 | 99.99th=[ 163] 00:08:31.366 write: IOPS=4179, BW=16.3MiB/s 
(17.1MB/s)(16.4MiB/1002msec); 0 zone resets 00:08:31.366 slat (nsec): min=10774, max=58669, avg=11978.01, stdev=1430.55 00:08:31.366 clat (usec): min=61, max=147, avg=103.58, stdev=14.22 00:08:31.366 lat (usec): min=76, max=193, avg=115.56, stdev=14.26 00:08:31.366 clat percentiles (usec): 00:08:31.366 | 1.00th=[ 71], 5.00th=[ 75], 10.00th=[ 77], 20.00th=[ 96], 00:08:31.366 | 30.00th=[ 102], 40.00th=[ 105], 50.00th=[ 109], 60.00th=[ 111], 00:08:31.366 | 70.00th=[ 113], 80.00th=[ 115], 90.00th=[ 118], 95.00th=[ 121], 00:08:31.366 | 99.00th=[ 126], 99.50th=[ 128], 99.90th=[ 135], 99.95th=[ 139], 00:08:31.366 | 99.99th=[ 147] 00:08:31.366 bw ( KiB/s): min=16384, max=16384, per=26.59%, avg=16384.00, stdev= 0.00, samples=1 00:08:31.366 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:08:31.366 lat (usec) : 100=22.27%, 250=77.73% 00:08:31.366 cpu : usr=5.49%, sys=8.89%, ctx=8285, majf=0, minf=1 00:08:31.366 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:31.366 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:31.366 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:31.366 issued rwts: total=4096,4188,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:31.366 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:31.366 job2: (groupid=0, jobs=1): err= 0: pid=547504: Thu Oct 17 17:33:09 2024 00:08:31.366 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:08:31.366 slat (nsec): min=8994, max=21567, avg=9555.87, stdev=947.08 00:08:31.366 clat (usec): min=77, max=245, avg=122.12, stdev=13.74 00:08:31.366 lat (usec): min=86, max=255, avg=131.67, stdev=13.78 00:08:31.366 clat percentiles (usec): 00:08:31.366 | 1.00th=[ 97], 5.00th=[ 108], 10.00th=[ 111], 20.00th=[ 114], 00:08:31.366 | 30.00th=[ 116], 40.00th=[ 118], 50.00th=[ 120], 60.00th=[ 122], 00:08:31.366 | 70.00th=[ 124], 80.00th=[ 128], 90.00th=[ 143], 95.00th=[ 151], 00:08:31.366 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 208], 99.95th=[ 233], 00:08:31.366 | 99.99th=[ 245] 00:08:31.366 write: IOPS=4072, BW=15.9MiB/s (16.7MB/s)(15.9MiB/1001msec); 0 zone resets 00:08:31.366 slat (nsec): min=11026, max=43361, avg=12131.93, stdev=1403.62 00:08:31.366 clat (usec): min=73, max=228, avg=112.37, stdev=13.08 00:08:31.366 lat (usec): min=86, max=240, avg=124.51, stdev=13.15 00:08:31.366 clat percentiles (usec): 00:08:31.366 | 1.00th=[ 92], 5.00th=[ 98], 10.00th=[ 101], 20.00th=[ 104], 00:08:31.366 | 30.00th=[ 108], 40.00th=[ 109], 50.00th=[ 111], 60.00th=[ 113], 00:08:31.366 | 70.00th=[ 115], 80.00th=[ 117], 90.00th=[ 123], 95.00th=[ 147], 00:08:31.366 | 99.00th=[ 161], 99.50th=[ 163], 99.90th=[ 200], 99.95th=[ 208], 00:08:31.366 | 99.99th=[ 229] 00:08:31.366 bw ( KiB/s): min=16384, max=16384, per=26.59%, avg=16384.00, stdev= 0.00, samples=1 00:08:31.366 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:08:31.366 lat (usec) : 100=4.56%, 250=95.44% 00:08:31.366 cpu : usr=5.10%, sys=8.50%, ctx=7661, majf=0, minf=1 00:08:31.367 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:31.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:31.367 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:31.367 issued rwts: total=3584,4077,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:31.367 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:31.367 job3: (groupid=0, jobs=1): err= 0: pid=547510: Thu Oct 17 17:33:09 2024 00:08:31.367 read: 
IOPS=3504, BW=13.7MiB/s (14.4MB/s)(13.7MiB/1001msec) 00:08:31.367 slat (nsec): min=9018, max=34867, avg=9573.76, stdev=1237.28 00:08:31.367 clat (usec): min=81, max=218, avg=131.34, stdev=11.71 00:08:31.367 lat (usec): min=91, max=228, avg=140.92, stdev=11.70 00:08:31.367 clat percentiles (usec): 00:08:31.367 | 1.00th=[ 97], 5.00th=[ 118], 10.00th=[ 121], 20.00th=[ 124], 00:08:31.367 | 30.00th=[ 126], 40.00th=[ 128], 50.00th=[ 131], 60.00th=[ 133], 00:08:31.367 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 147], 95.00th=[ 153], 00:08:31.367 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 184], 99.95th=[ 208], 00:08:31.367 | 99.99th=[ 219] 00:08:31.367 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:08:31.367 slat (nsec): min=10985, max=61218, avg=12089.57, stdev=1426.93 00:08:31.367 clat (usec): min=76, max=260, avg=123.75, stdev=14.21 00:08:31.367 lat (usec): min=88, max=272, avg=135.84, stdev=14.29 00:08:31.367 clat percentiles (usec): 00:08:31.367 | 1.00th=[ 88], 5.00th=[ 109], 10.00th=[ 113], 20.00th=[ 116], 00:08:31.367 | 30.00th=[ 118], 40.00th=[ 120], 50.00th=[ 122], 60.00th=[ 124], 00:08:31.367 | 70.00th=[ 127], 80.00th=[ 131], 90.00th=[ 143], 95.00th=[ 151], 00:08:31.367 | 99.00th=[ 167], 99.50th=[ 178], 99.90th=[ 206], 99.95th=[ 249], 00:08:31.367 | 99.99th=[ 262] 00:08:31.367 bw ( KiB/s): min=16384, max=16384, per=26.59%, avg=16384.00, stdev= 0.00, samples=1 00:08:31.367 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:08:31.367 lat (usec) : 100=2.50%, 250=97.49%, 500=0.01% 00:08:31.367 cpu : usr=4.40%, sys=8.10%, ctx=7092, majf=0, minf=1 00:08:31.367 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:31.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:31.367 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:31.367 issued rwts: total=3508,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:31.367 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:31.367 00:08:31.367 Run status group 0 (all jobs): 00:08:31.367 READ: bw=56.7MiB/s (59.5MB/s), 13.1MiB/s-16.0MiB/s (13.8MB/s-16.7MB/s), io=56.9MiB (59.6MB), run=1001-1002msec 00:08:31.367 WRITE: bw=60.2MiB/s (63.1MB/s), 14.0MiB/s-16.3MiB/s (14.7MB/s-17.1MB/s), io=60.3MiB (63.2MB), run=1001-1002msec 00:08:31.367 00:08:31.367 Disk stats (read/write): 00:08:31.367 nvme0n1: ios=2951/3072, merge=0/0, ticks=389/364, in_queue=753, util=85.37% 00:08:31.367 nvme0n2: ios=3136/3584, merge=0/0, ticks=337/371, in_queue=708, util=86.05% 00:08:31.367 nvme0n3: ios=3072/3529, merge=0/0, ticks=349/379, in_queue=728, util=88.76% 00:08:31.367 nvme0n4: ios=3006/3072, merge=0/0, ticks=376/343, in_queue=719, util=89.61% 00:08:31.367 17:33:09 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:08:31.367 [global] 00:08:31.367 thread=1 00:08:31.367 invalidate=1 00:08:31.367 rw=write 00:08:31.367 time_based=1 00:08:31.367 runtime=1 00:08:31.367 ioengine=libaio 00:08:31.367 direct=1 00:08:31.367 bs=4096 00:08:31.367 iodepth=128 00:08:31.367 norandommap=0 00:08:31.367 numjobs=1 00:08:31.367 00:08:31.367 verify_dump=1 00:08:31.367 verify_backlog=512 00:08:31.367 verify_state_save=0 00:08:31.367 do_verify=1 00:08:31.367 verify=crc32c-intel 00:08:31.367 [job0] 00:08:31.367 filename=/dev/nvme0n1 00:08:31.367 [job1] 00:08:31.367 filename=/dev/nvme0n2 00:08:31.367 [job2] 00:08:31.367 filename=/dev/nvme0n3 
00:08:31.367 [job3] 00:08:31.367 filename=/dev/nvme0n4 00:08:31.367 Could not set queue depth (nvme0n1) 00:08:31.367 Could not set queue depth (nvme0n2) 00:08:31.367 Could not set queue depth (nvme0n3) 00:08:31.367 Could not set queue depth (nvme0n4) 00:08:31.367 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:31.367 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:31.367 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:31.367 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:31.367 fio-3.35 00:08:31.367 Starting 4 threads 00:08:32.740 00:08:32.740 job0: (groupid=0, jobs=1): err= 0: pid=547876: Thu Oct 17 17:33:10 2024 00:08:32.740 read: IOPS=6352, BW=24.8MiB/s (26.0MB/s)(24.9MiB/1004msec) 00:08:32.740 slat (usec): min=2, max=5766, avg=72.11, stdev=346.46 00:08:32.740 clat (usec): min=2678, max=18913, avg=9703.43, stdev=3102.33 00:08:32.740 lat (usec): min=2843, max=19333, avg=9775.54, stdev=3113.94 00:08:32.740 clat percentiles (usec): 00:08:32.740 | 1.00th=[ 3785], 5.00th=[ 4686], 10.00th=[ 5800], 20.00th=[ 7111], 00:08:32.740 | 30.00th=[ 7898], 40.00th=[ 8979], 50.00th=[ 9634], 60.00th=[10159], 00:08:32.740 | 70.00th=[10814], 80.00th=[11863], 90.00th=[14353], 95.00th=[16057], 00:08:32.740 | 99.00th=[17433], 99.50th=[17957], 99.90th=[18220], 99.95th=[18482], 00:08:32.740 | 99.99th=[19006] 00:08:32.740 write: IOPS=6629, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1004msec); 0 zone resets 00:08:32.740 slat (usec): min=2, max=4993, avg=75.58, stdev=327.06 00:08:32.740 clat (usec): min=2590, max=21486, avg=9778.36, stdev=3683.42 00:08:32.740 lat (usec): min=2596, max=21491, avg=9853.94, stdev=3701.60 00:08:32.740 clat percentiles (usec): 00:08:32.740 | 1.00th=[ 3523], 5.00th=[ 5080], 10.00th=[ 5669], 20.00th=[ 6521], 00:08:32.740 | 30.00th=[ 7177], 40.00th=[ 8094], 50.00th=[ 9372], 60.00th=[10290], 00:08:32.740 | 70.00th=[11338], 80.00th=[12649], 90.00th=[14877], 95.00th=[17171], 00:08:32.740 | 99.00th=[20055], 99.50th=[20579], 99.90th=[20841], 99.95th=[20841], 00:08:32.740 | 99.99th=[21365] 00:08:32.740 bw ( KiB/s): min=24576, max=28672, per=25.10%, avg=26624.00, stdev=2896.31, samples=2 00:08:32.740 iops : min= 6144, max= 7168, avg=6656.00, stdev=724.08, samples=2 00:08:32.740 lat (msec) : 4=1.32%, 10=55.39%, 20=42.67%, 50=0.61% 00:08:32.740 cpu : usr=4.19%, sys=7.68%, ctx=1291, majf=0, minf=1 00:08:32.740 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:08:32.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.740 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:32.740 issued rwts: total=6378,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:32.740 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:32.740 job1: (groupid=0, jobs=1): err= 0: pid=547877: Thu Oct 17 17:33:10 2024 00:08:32.740 read: IOPS=7167, BW=28.0MiB/s (29.4MB/s)(28.1MiB/1004msec) 00:08:32.740 slat (usec): min=2, max=4563, avg=68.03, stdev=305.53 00:08:32.740 clat (usec): min=2776, max=18783, avg=8905.51, stdev=2887.58 00:08:32.740 lat (usec): min=4008, max=18792, avg=8973.54, stdev=2902.51 00:08:32.740 clat percentiles (usec): 00:08:32.740 | 1.00th=[ 4686], 5.00th=[ 5342], 10.00th=[ 5735], 20.00th=[ 6521], 00:08:32.740 | 30.00th=[ 7046], 40.00th=[ 7504], 50.00th=[ 7963], 60.00th=[ 8979], 
00:08:32.740 | 70.00th=[10028], 80.00th=[11338], 90.00th=[13173], 95.00th=[14746], 00:08:32.740 | 99.00th=[16909], 99.50th=[17433], 99.90th=[18744], 99.95th=[18744], 00:08:32.740 | 99.99th=[18744] 00:08:32.740 write: IOPS=7649, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1004msec); 0 zone resets 00:08:32.740 slat (usec): min=2, max=5560, avg=61.19, stdev=285.34 00:08:32.740 clat (usec): min=3129, max=16743, avg=8233.12, stdev=2668.27 00:08:32.740 lat (usec): min=3133, max=16750, avg=8294.31, stdev=2681.14 00:08:32.740 clat percentiles (usec): 00:08:32.740 | 1.00th=[ 3949], 5.00th=[ 4686], 10.00th=[ 5211], 20.00th=[ 6128], 00:08:32.740 | 30.00th=[ 6652], 40.00th=[ 7242], 50.00th=[ 7570], 60.00th=[ 8225], 00:08:32.740 | 70.00th=[ 9241], 80.00th=[10552], 90.00th=[12387], 95.00th=[13435], 00:08:32.740 | 99.00th=[15270], 99.50th=[16450], 99.90th=[16581], 99.95th=[16581], 00:08:32.740 | 99.99th=[16712] 00:08:32.740 bw ( KiB/s): min=26912, max=33736, per=28.59%, avg=30324.00, stdev=4825.30, samples=2 00:08:32.740 iops : min= 6728, max= 8434, avg=7581.00, stdev=1206.32, samples=2 00:08:32.740 lat (msec) : 4=0.61%, 10=72.41%, 20=26.98% 00:08:32.740 cpu : usr=3.79%, sys=8.97%, ctx=1307, majf=0, minf=1 00:08:32.740 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:08:32.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.740 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:32.740 issued rwts: total=7196,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:32.740 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:32.740 job2: (groupid=0, jobs=1): err= 0: pid=547878: Thu Oct 17 17:33:10 2024 00:08:32.740 read: IOPS=5746, BW=22.4MiB/s (23.5MB/s)(22.5MiB/1003msec) 00:08:32.740 slat (usec): min=2, max=5733, avg=80.64, stdev=357.93 00:08:32.740 clat (usec): min=2059, max=21022, avg=10734.22, stdev=3100.24 00:08:32.740 lat (usec): min=3091, max=21050, avg=10814.86, stdev=3116.22 00:08:32.740 clat percentiles (usec): 00:08:32.740 | 1.00th=[ 4424], 5.00th=[ 6128], 10.00th=[ 6980], 20.00th=[ 7963], 00:08:32.740 | 30.00th=[ 8848], 40.00th=[ 9503], 50.00th=[10421], 60.00th=[11600], 00:08:32.740 | 70.00th=[12387], 80.00th=[13304], 90.00th=[15270], 95.00th=[16319], 00:08:32.740 | 99.00th=[17433], 99.50th=[19006], 99.90th=[19530], 99.95th=[20317], 00:08:32.740 | 99.99th=[21103] 00:08:32.740 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:08:32.740 slat (usec): min=2, max=5077, avg=82.13, stdev=344.58 00:08:32.740 clat (usec): min=3611, max=18256, avg=10607.64, stdev=2848.14 00:08:32.740 lat (usec): min=3726, max=19900, avg=10689.77, stdev=2862.07 00:08:32.740 clat percentiles (usec): 00:08:32.740 | 1.00th=[ 4883], 5.00th=[ 6194], 10.00th=[ 6980], 20.00th=[ 7963], 00:08:32.740 | 30.00th=[ 8717], 40.00th=[ 9634], 50.00th=[10421], 60.00th=[11338], 00:08:32.740 | 70.00th=[12256], 80.00th=[13173], 90.00th=[14615], 95.00th=[15270], 00:08:32.740 | 99.00th=[16319], 99.50th=[16909], 99.90th=[18220], 99.95th=[18220], 00:08:32.740 | 99.99th=[18220] 00:08:32.740 bw ( KiB/s): min=24184, max=24968, per=23.17%, avg=24576.00, stdev=554.37, samples=2 00:08:32.740 iops : min= 6046, max= 6242, avg=6144.00, stdev=138.59, samples=2 00:08:32.740 lat (msec) : 4=0.49%, 10=44.21%, 20=55.28%, 50=0.03% 00:08:32.740 cpu : usr=3.49%, sys=6.79%, ctx=1112, majf=0, minf=1 00:08:32.740 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:08:32.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:08:32.740 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:32.740 issued rwts: total=5764,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:32.740 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:32.740 job3: (groupid=0, jobs=1): err= 0: pid=547879: Thu Oct 17 17:33:10 2024 00:08:32.740 read: IOPS=5641, BW=22.0MiB/s (23.1MB/s)(22.1MiB/1004msec) 00:08:32.740 slat (usec): min=2, max=6959, avg=86.07, stdev=399.75 00:08:32.740 clat (usec): min=3087, max=24864, avg=11091.23, stdev=3684.93 00:08:32.740 lat (usec): min=3276, max=24874, avg=11177.30, stdev=3703.48 00:08:32.740 clat percentiles (usec): 00:08:32.740 | 1.00th=[ 5080], 5.00th=[ 6259], 10.00th=[ 6980], 20.00th=[ 8029], 00:08:32.740 | 30.00th=[ 8979], 40.00th=[ 9634], 50.00th=[10421], 60.00th=[11338], 00:08:32.740 | 70.00th=[12387], 80.00th=[13698], 90.00th=[16057], 95.00th=[18220], 00:08:32.740 | 99.00th=[22676], 99.50th=[23725], 99.90th=[24773], 99.95th=[24773], 00:08:32.740 | 99.99th=[24773] 00:08:32.740 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:08:32.740 slat (usec): min=2, max=6486, avg=78.03, stdev=351.13 00:08:32.740 clat (usec): min=3515, max=23102, avg=10457.94, stdev=2885.19 00:08:32.740 lat (usec): min=3524, max=23131, avg=10535.98, stdev=2898.86 00:08:32.740 clat percentiles (usec): 00:08:32.740 | 1.00th=[ 5014], 5.00th=[ 6587], 10.00th=[ 7242], 20.00th=[ 8094], 00:08:32.740 | 30.00th=[ 8717], 40.00th=[ 9241], 50.00th=[10028], 60.00th=[10814], 00:08:32.740 | 70.00th=[11863], 80.00th=[12780], 90.00th=[13960], 95.00th=[15270], 00:08:32.740 | 99.00th=[19792], 99.50th=[20317], 99.90th=[22676], 99.95th=[22676], 00:08:32.740 | 99.99th=[23200] 00:08:32.740 bw ( KiB/s): min=23816, max=24576, per=22.81%, avg=24196.00, stdev=537.40, samples=2 00:08:32.740 iops : min= 5954, max= 6144, avg=6049.00, stdev=134.35, samples=2 00:08:32.740 lat (msec) : 4=0.33%, 10=46.66%, 20=51.32%, 50=1.69% 00:08:32.740 cpu : usr=3.49%, sys=7.18%, ctx=1086, majf=0, minf=1 00:08:32.740 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:08:32.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:32.740 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:32.740 issued rwts: total=5664,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:32.740 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:32.740 00:08:32.740 Run status group 0 (all jobs): 00:08:32.740 READ: bw=97.3MiB/s (102MB/s), 22.0MiB/s-28.0MiB/s (23.1MB/s-29.4MB/s), io=97.7MiB (102MB), run=1003-1004msec 00:08:32.740 WRITE: bw=104MiB/s (109MB/s), 23.9MiB/s-29.9MiB/s (25.1MB/s-31.3MB/s), io=104MiB (109MB), run=1003-1004msec 00:08:32.740 00:08:32.740 Disk stats (read/write): 00:08:32.740 nvme0n1: ios=5682/5965, merge=0/0, ticks=16592/16845, in_queue=33437, util=85.47% 00:08:32.740 nvme0n2: ios=6394/6656, merge=0/0, ticks=15643/16023, in_queue=31666, util=86.06% 00:08:32.740 nvme0n3: ios=4608/4869, merge=0/0, ticks=17259/18429, in_queue=35688, util=88.09% 00:08:32.740 nvme0n4: ios=4962/5120, merge=0/0, ticks=16341/16024, in_queue=32365, util=89.24% 00:08:32.741 17:33:10 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:08:32.741 [global] 00:08:32.741 thread=1 00:08:32.741 invalidate=1 00:08:32.741 rw=randwrite 00:08:32.741 time_based=1 00:08:32.741 runtime=1 00:08:32.741 ioengine=libaio 00:08:32.741 
direct=1 00:08:32.741 bs=4096 00:08:32.741 iodepth=128 00:08:32.741 norandommap=0 00:08:32.741 numjobs=1 00:08:32.741 00:08:32.741 verify_dump=1 00:08:32.741 verify_backlog=512 00:08:32.741 verify_state_save=0 00:08:32.741 do_verify=1 00:08:32.741 verify=crc32c-intel 00:08:32.741 [job0] 00:08:32.741 filename=/dev/nvme0n1 00:08:32.741 [job1] 00:08:32.741 filename=/dev/nvme0n2 00:08:32.741 [job2] 00:08:32.741 filename=/dev/nvme0n3 00:08:32.741 [job3] 00:08:32.741 filename=/dev/nvme0n4 00:08:32.741 Could not set queue depth (nvme0n1) 00:08:32.741 Could not set queue depth (nvme0n2) 00:08:32.741 Could not set queue depth (nvme0n3) 00:08:32.741 Could not set queue depth (nvme0n4) 00:08:32.997 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:32.997 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:32.997 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:32.997 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:32.997 fio-3.35 00:08:32.997 Starting 4 threads 00:08:34.380 00:08:34.380 job0: (groupid=0, jobs=1): err= 0: pid=548174: Thu Oct 17 17:33:12 2024 00:08:34.380 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:08:34.380 slat (usec): min=2, max=5893, avg=104.88, stdev=455.81 00:08:34.380 clat (usec): min=2964, max=27523, avg=13801.01, stdev=4305.73 00:08:34.380 lat (usec): min=2986, max=27532, avg=13905.88, stdev=4320.51 00:08:34.380 clat percentiles (usec): 00:08:34.381 | 1.00th=[ 4686], 5.00th=[ 7242], 10.00th=[ 8455], 20.00th=[10421], 00:08:34.381 | 30.00th=[11469], 40.00th=[12387], 50.00th=[13566], 60.00th=[14615], 00:08:34.381 | 70.00th=[15533], 80.00th=[16909], 90.00th=[19792], 95.00th=[21365], 00:08:34.381 | 99.00th=[25035], 99.50th=[26608], 99.90th=[27132], 99.95th=[27395], 00:08:34.381 | 99.99th=[27395] 00:08:34.381 write: IOPS=5022, BW=19.6MiB/s (20.6MB/s)(19.7MiB/1003msec); 0 zone resets 00:08:34.381 slat (usec): min=2, max=6610, avg=97.19, stdev=423.62 00:08:34.381 clat (usec): min=2839, max=25524, avg=12534.27, stdev=4295.57 00:08:34.381 lat (usec): min=2846, max=25533, avg=12631.46, stdev=4320.40 00:08:34.381 clat percentiles (usec): 00:08:34.381 | 1.00th=[ 3884], 5.00th=[ 4686], 10.00th=[ 6063], 20.00th=[ 9110], 00:08:34.381 | 30.00th=[10683], 40.00th=[11731], 50.00th=[12649], 60.00th=[13698], 00:08:34.381 | 70.00th=[14746], 80.00th=[15926], 90.00th=[17957], 95.00th=[19530], 00:08:34.381 | 99.00th=[21890], 99.50th=[22938], 99.90th=[24773], 99.95th=[25560], 00:08:34.381 | 99.99th=[25560] 00:08:34.381 bw ( KiB/s): min=18808, max=20439, per=21.45%, avg=19623.50, stdev=1153.29, samples=2 00:08:34.381 iops : min= 4702, max= 5109, avg=4905.50, stdev=287.79, samples=2 00:08:34.381 lat (msec) : 4=0.78%, 10=21.16%, 20=71.41%, 50=6.66% 00:08:34.381 cpu : usr=3.79%, sys=4.59%, ctx=1139, majf=0, minf=1 00:08:34.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:08:34.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:34.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:34.381 issued rwts: total=4608,5038,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:34.381 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:34.381 job1: (groupid=0, jobs=1): err= 0: pid=548175: Thu Oct 17 17:33:12 2024 00:08:34.381 read: IOPS=6515, 
BW=25.5MiB/s (26.7MB/s)(25.5MiB/1003msec) 00:08:34.381 slat (usec): min=2, max=5061, avg=77.49, stdev=374.93 00:08:34.381 clat (usec): min=2159, max=24921, avg=10189.75, stdev=3697.14 00:08:34.381 lat (usec): min=2990, max=24923, avg=10267.24, stdev=3709.07 00:08:34.381 clat percentiles (usec): 00:08:34.381 | 1.00th=[ 3752], 5.00th=[ 4883], 10.00th=[ 5800], 20.00th=[ 7111], 00:08:34.381 | 30.00th=[ 8029], 40.00th=[ 8848], 50.00th=[ 9765], 60.00th=[10683], 00:08:34.381 | 70.00th=[11994], 80.00th=[13173], 90.00th=[14877], 95.00th=[16909], 00:08:34.381 | 99.00th=[20317], 99.50th=[22676], 99.90th=[24511], 99.95th=[24511], 00:08:34.381 | 99.99th=[25035] 00:08:34.381 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:08:34.381 slat (usec): min=2, max=6188, avg=68.99, stdev=343.74 00:08:34.381 clat (usec): min=2329, max=23056, avg=9055.67, stdev=3608.91 00:08:34.381 lat (usec): min=2346, max=23063, avg=9124.65, stdev=3625.23 00:08:34.381 clat percentiles (usec): 00:08:34.381 | 1.00th=[ 3621], 5.00th=[ 4228], 10.00th=[ 4752], 20.00th=[ 5604], 00:08:34.381 | 30.00th=[ 6652], 40.00th=[ 7570], 50.00th=[ 8455], 60.00th=[ 9503], 00:08:34.381 | 70.00th=[11076], 80.00th=[12256], 90.00th=[14222], 95.00th=[15533], 00:08:34.381 | 99.00th=[19006], 99.50th=[19792], 99.90th=[20317], 99.95th=[20317], 00:08:34.381 | 99.99th=[22938] 00:08:34.381 bw ( KiB/s): min=25021, max=28176, per=29.08%, avg=26598.50, stdev=2230.92, samples=2 00:08:34.381 iops : min= 6255, max= 7044, avg=6649.50, stdev=557.91, samples=2 00:08:34.381 lat (msec) : 4=2.12%, 10=55.83%, 20=41.23%, 50=0.83% 00:08:34.381 cpu : usr=4.19%, sys=6.89%, ctx=1432, majf=0, minf=1 00:08:34.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:08:34.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:34.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:34.381 issued rwts: total=6535,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:34.381 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:34.381 job2: (groupid=0, jobs=1): err= 0: pid=548177: Thu Oct 17 17:33:12 2024 00:08:34.381 read: IOPS=5881, BW=23.0MiB/s (24.1MB/s)(23.0MiB/1003msec) 00:08:34.381 slat (usec): min=2, max=6777, avg=84.52, stdev=391.62 00:08:34.381 clat (usec): min=1155, max=25342, avg=10888.16, stdev=4901.08 00:08:34.381 lat (usec): min=3435, max=25353, avg=10972.68, stdev=4928.98 00:08:34.381 clat percentiles (usec): 00:08:34.381 | 1.00th=[ 4293], 5.00th=[ 5407], 10.00th=[ 5932], 20.00th=[ 7046], 00:08:34.381 | 30.00th=[ 7832], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[10028], 00:08:34.381 | 70.00th=[11863], 80.00th=[14877], 90.00th=[19530], 95.00th=[21365], 00:08:34.381 | 99.00th=[23200], 99.50th=[24511], 99.90th=[25297], 99.95th=[25297], 00:08:34.381 | 99.99th=[25297] 00:08:34.381 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:08:34.381 slat (usec): min=2, max=6169, avg=76.78, stdev=331.19 00:08:34.381 clat (usec): min=2975, max=24760, avg=10211.25, stdev=4083.36 00:08:34.381 lat (usec): min=3701, max=24785, avg=10288.03, stdev=4101.35 00:08:34.381 clat percentiles (usec): 00:08:34.381 | 1.00th=[ 4948], 5.00th=[ 5932], 10.00th=[ 6390], 20.00th=[ 7177], 00:08:34.381 | 30.00th=[ 7898], 40.00th=[ 8455], 50.00th=[ 8979], 60.00th=[ 9765], 00:08:34.381 | 70.00th=[10814], 80.00th=[12387], 90.00th=[16909], 95.00th=[19530], 00:08:34.381 | 99.00th=[23200], 99.50th=[24511], 99.90th=[24773], 99.95th=[24773], 00:08:34.381 | 99.99th=[24773] 
00:08:34.381 bw ( KiB/s): min=20439, max=28672, per=26.85%, avg=24555.50, stdev=5821.61, samples=2 00:08:34.381 iops : min= 5109, max= 7168, avg=6138.50, stdev=1455.93, samples=2 00:08:34.381 lat (msec) : 2=0.01%, 4=0.23%, 10=61.67%, 20=31.96%, 50=6.13% 00:08:34.381 cpu : usr=3.79%, sys=6.19%, ctx=1314, majf=0, minf=2 00:08:34.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:08:34.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:34.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:34.381 issued rwts: total=5899,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:34.381 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:34.381 job3: (groupid=0, jobs=1): err= 0: pid=548181: Thu Oct 17 17:33:12 2024 00:08:34.381 read: IOPS=4830, BW=18.9MiB/s (19.8MB/s)(18.9MiB/1004msec) 00:08:34.381 slat (usec): min=2, max=6538, avg=99.60, stdev=473.25 00:08:34.381 clat (usec): min=856, max=26911, avg=12910.55, stdev=4303.61 00:08:34.381 lat (usec): min=3648, max=26921, avg=13010.15, stdev=4313.31 00:08:34.381 clat percentiles (usec): 00:08:34.381 | 1.00th=[ 4015], 5.00th=[ 6915], 10.00th=[ 7898], 20.00th=[ 9241], 00:08:34.381 | 30.00th=[10290], 40.00th=[11469], 50.00th=[12387], 60.00th=[13566], 00:08:34.381 | 70.00th=[14877], 80.00th=[16057], 90.00th=[18482], 95.00th=[20841], 00:08:34.381 | 99.00th=[25297], 99.50th=[25297], 99.90th=[26870], 99.95th=[26870], 00:08:34.381 | 99.99th=[26870] 00:08:34.381 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:08:34.381 slat (usec): min=2, max=7729, avg=95.96, stdev=473.89 00:08:34.381 clat (usec): min=4538, max=26694, avg=12528.01, stdev=3931.39 00:08:34.381 lat (usec): min=4827, max=26706, avg=12623.97, stdev=3951.59 00:08:34.381 clat percentiles (usec): 00:08:34.381 | 1.00th=[ 5407], 5.00th=[ 6456], 10.00th=[ 7701], 20.00th=[ 9372], 00:08:34.381 | 30.00th=[10552], 40.00th=[11338], 50.00th=[12256], 60.00th=[13173], 00:08:34.381 | 70.00th=[14222], 80.00th=[15008], 90.00th=[17433], 95.00th=[20317], 00:08:34.381 | 99.00th=[25035], 99.50th=[25822], 99.90th=[26608], 99.95th=[26608], 00:08:34.381 | 99.99th=[26608] 00:08:34.381 bw ( KiB/s): min=20439, max=20480, per=22.37%, avg=20459.50, stdev=28.99, samples=2 00:08:34.381 iops : min= 5109, max= 5120, avg=5114.50, stdev= 7.78, samples=2 00:08:34.381 lat (usec) : 1000=0.01% 00:08:34.381 lat (msec) : 4=0.45%, 10=25.23%, 20=68.53%, 50=5.79% 00:08:34.381 cpu : usr=3.09%, sys=5.78%, ctx=1247, majf=0, minf=1 00:08:34.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:08:34.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:34.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:34.381 issued rwts: total=4850,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:34.381 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:34.381 00:08:34.381 Run status group 0 (all jobs): 00:08:34.381 READ: bw=85.2MiB/s (89.3MB/s), 17.9MiB/s-25.5MiB/s (18.8MB/s-26.7MB/s), io=85.5MiB (89.7MB), run=1003-1004msec 00:08:34.381 WRITE: bw=89.3MiB/s (93.7MB/s), 19.6MiB/s-25.9MiB/s (20.6MB/s-27.2MB/s), io=89.7MiB (94.0MB), run=1003-1004msec 00:08:34.381 00:08:34.381 Disk stats (read/write): 00:08:34.381 nvme0n1: ios=4140/4096, merge=0/0, ticks=16169/15116, in_queue=31285, util=85.17% 00:08:34.381 nvme0n2: ios=5187/5632, merge=0/0, ticks=17085/15693, in_queue=32778, util=84.08% 00:08:34.381 nvme0n3: ios=4838/5120, merge=0/0, 
ticks=15987/15338, in_queue=31325, util=87.85% 00:08:34.381 nvme0n4: ios=4096/4369, merge=0/0, ticks=15887/16331, in_queue=32218, util=88.58% 00:08:34.381 17:33:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:08:34.381 17:33:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=548365 00:08:34.381 17:33:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:08:34.381 17:33:12 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:08:34.381 [global] 00:08:34.381 thread=1 00:08:34.381 invalidate=1 00:08:34.381 rw=read 00:08:34.381 time_based=1 00:08:34.381 runtime=10 00:08:34.381 ioengine=libaio 00:08:34.381 direct=1 00:08:34.381 bs=4096 00:08:34.381 iodepth=1 00:08:34.381 norandommap=1 00:08:34.381 numjobs=1 00:08:34.381 00:08:34.381 [job0] 00:08:34.381 filename=/dev/nvme0n1 00:08:34.381 [job1] 00:08:34.381 filename=/dev/nvme0n2 00:08:34.381 [job2] 00:08:34.381 filename=/dev/nvme0n3 00:08:34.381 [job3] 00:08:34.381 filename=/dev/nvme0n4 00:08:34.381 Could not set queue depth (nvme0n1) 00:08:34.381 Could not set queue depth (nvme0n2) 00:08:34.381 Could not set queue depth (nvme0n3) 00:08:34.381 Could not set queue depth (nvme0n4) 00:08:34.639 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:34.639 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:34.639 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:34.639 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:34.639 fio-3.35 00:08:34.639 Starting 4 threads 00:08:37.161 17:33:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:08:37.418 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=98717696, buflen=4096 00:08:37.418 fio: pid=548482, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:37.418 17:33:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:08:37.675 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=80797696, buflen=4096 00:08:37.675 fio: pid=548481, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:37.675 17:33:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:37.675 17:33:15 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:08:37.932 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=22806528, buflen=4096 00:08:37.932 fio: pid=548479, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:37.932 17:33:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:37.932 17:33:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:08:38.189 fio: io_u 
error on file /dev/nvme0n2: Operation not supported: read offset=5971968, buflen=4096 00:08:38.189 fio: pid=548480, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:38.189 17:33:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:38.189 17:33:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:08:38.189 00:08:38.189 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=548479: Thu Oct 17 17:33:16 2024 00:08:38.189 read: IOPS=6982, BW=27.3MiB/s (28.6MB/s)(85.8MiB/3144msec) 00:08:38.189 slat (usec): min=3, max=28928, avg=12.25, stdev=256.05 00:08:38.189 clat (usec): min=44, max=334, avg=129.04, stdev=32.83 00:08:38.189 lat (usec): min=54, max=29021, avg=141.28, stdev=257.89 00:08:38.189 clat percentiles (usec): 00:08:38.189 | 1.00th=[ 60], 5.00th=[ 74], 10.00th=[ 80], 20.00th=[ 93], 00:08:38.189 | 30.00th=[ 116], 40.00th=[ 133], 50.00th=[ 141], 60.00th=[ 143], 00:08:38.189 | 70.00th=[ 147], 80.00th=[ 151], 90.00th=[ 159], 95.00th=[ 186], 00:08:38.189 | 99.00th=[ 202], 99.50th=[ 206], 99.90th=[ 215], 99.95th=[ 219], 00:08:38.189 | 99.99th=[ 243] 00:08:38.189 bw ( KiB/s): min=25832, max=33269, per=23.42%, avg=27682.17, stdev=2867.96, samples=6 00:08:38.189 iops : min= 6458, max= 8317, avg=6920.50, stdev=716.89, samples=6 00:08:38.189 lat (usec) : 50=0.01%, 100=24.12%, 250=75.86%, 500=0.01% 00:08:38.189 cpu : usr=2.58%, sys=7.73%, ctx=21959, majf=0, minf=1 00:08:38.189 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:38.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:38.189 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:38.189 issued rwts: total=21953,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:38.189 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:38.189 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=548480: Thu Oct 17 17:33:16 2024 00:08:38.189 read: IOPS=10.1k, BW=39.5MiB/s (41.4MB/s)(134MiB/3385msec) 00:08:38.189 slat (usec): min=7, max=17590, avg=11.46, stdev=182.32 00:08:38.189 clat (usec): min=49, max=19708, avg=85.45, stdev=107.68 00:08:38.189 lat (usec): min=59, max=19717, avg=96.92, stdev=211.79 00:08:38.189 clat percentiles (usec): 00:08:38.189 | 1.00th=[ 57], 5.00th=[ 63], 10.00th=[ 72], 20.00th=[ 76], 00:08:38.189 | 30.00th=[ 78], 40.00th=[ 80], 50.00th=[ 81], 60.00th=[ 83], 00:08:38.189 | 70.00th=[ 85], 80.00th=[ 89], 90.00th=[ 118], 95.00th=[ 126], 00:08:38.189 | 99.00th=[ 137], 99.50th=[ 143], 99.90th=[ 167], 99.95th=[ 180], 00:08:38.189 | 99.99th=[ 367] 00:08:38.189 bw ( KiB/s): min=31792, max=44128, per=34.16%, avg=40367.17, stdev=4648.78, samples=6 00:08:38.189 iops : min= 7948, max=11032, avg=10091.67, stdev=1162.21, samples=6 00:08:38.190 lat (usec) : 50=0.01%, 100=85.64%, 250=14.33%, 500=0.01%, 750=0.01% 00:08:38.190 lat (msec) : 2=0.01%, 20=0.01% 00:08:38.190 cpu : usr=3.61%, sys=11.50%, ctx=34233, majf=0, minf=2 00:08:38.190 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:38.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:38.190 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:38.190 issued rwts: total=34227,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:08:38.190 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:38.190 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=548481: Thu Oct 17 17:33:16 2024 00:08:38.190 read: IOPS=6671, BW=26.1MiB/s (27.3MB/s)(77.1MiB/2957msec) 00:08:38.190 slat (usec): min=7, max=15837, avg=11.08, stdev=159.09 00:08:38.190 clat (usec): min=70, max=219, avg=135.95, stdev=22.19 00:08:38.190 lat (usec): min=80, max=15958, avg=147.03, stdev=160.44 00:08:38.190 clat percentiles (usec): 00:08:38.190 | 1.00th=[ 83], 5.00th=[ 92], 10.00th=[ 98], 20.00th=[ 121], 00:08:38.190 | 30.00th=[ 133], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 145], 00:08:38.190 | 70.00th=[ 147], 80.00th=[ 151], 90.00th=[ 157], 95.00th=[ 165], 00:08:38.190 | 99.00th=[ 188], 99.50th=[ 194], 99.90th=[ 202], 99.95th=[ 204], 00:08:38.190 | 99.99th=[ 215] 00:08:38.190 bw ( KiB/s): min=25824, max=28672, per=22.39%, avg=26460.80, stdev=1238.14, samples=5 00:08:38.190 iops : min= 6456, max= 7168, avg=6615.20, stdev=309.53, samples=5 00:08:38.190 lat (usec) : 100=11.15%, 250=88.84% 00:08:38.190 cpu : usr=2.44%, sys=7.68%, ctx=19731, majf=0, minf=2 00:08:38.190 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:38.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:38.190 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:38.190 issued rwts: total=19727,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:38.190 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:38.190 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=548482: Thu Oct 17 17:33:16 2024 00:08:38.190 read: IOPS=8828, BW=34.5MiB/s (36.2MB/s)(94.1MiB/2730msec) 00:08:38.190 slat (nsec): min=8670, max=66000, avg=9334.76, stdev=921.33 00:08:38.190 clat (usec): min=73, max=348, avg=101.90, stdev=11.29 00:08:38.190 lat (usec): min=82, max=357, avg=111.24, stdev=11.32 00:08:38.190 clat percentiles (usec): 00:08:38.190 | 1.00th=[ 84], 5.00th=[ 88], 10.00th=[ 91], 20.00th=[ 94], 00:08:38.190 | 30.00th=[ 96], 40.00th=[ 98], 50.00th=[ 100], 60.00th=[ 102], 00:08:38.190 | 70.00th=[ 105], 80.00th=[ 110], 90.00th=[ 119], 95.00th=[ 125], 00:08:38.190 | 99.00th=[ 135], 99.50th=[ 141], 99.90th=[ 161], 99.95th=[ 165], 00:08:38.190 | 99.99th=[ 182] 00:08:38.190 bw ( KiB/s): min=31128, max=36688, per=29.96%, avg=35400.00, stdev=2392.23, samples=5 00:08:38.190 iops : min= 7782, max= 9172, avg=8850.00, stdev=598.06, samples=5 00:08:38.190 lat (usec) : 100=50.87%, 250=49.12%, 500=0.01% 00:08:38.190 cpu : usr=3.70%, sys=9.64%, ctx=24102, majf=0, minf=2 00:08:38.190 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:38.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:38.190 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:38.190 issued rwts: total=24102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:38.190 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:38.190 00:08:38.190 Run status group 0 (all jobs): 00:08:38.190 READ: bw=115MiB/s (121MB/s), 26.1MiB/s-39.5MiB/s (27.3MB/s-41.4MB/s), io=391MiB (410MB), run=2730-3385msec 00:08:38.190 00:08:38.190 Disk stats (read/write): 00:08:38.190 nvme0n1: ios=21609/0, merge=0/0, ticks=2684/0, in_queue=2684, util=93.68% 00:08:38.190 nvme0n2: ios=33941/0, merge=0/0, ticks=2668/0, in_queue=2668, util=94.23% 00:08:38.190 nvme0n3: 
ios=19012/0, merge=0/0, ticks=2477/0, in_queue=2477, util=95.51% 00:08:38.190 nvme0n4: ios=23049/0, merge=0/0, ticks=2238/0, in_queue=2238, util=96.41% 00:08:38.447 17:33:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:38.447 17:33:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:08:38.447 17:33:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:38.447 17:33:16 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:08:38.703 17:33:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:38.703 17:33:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:08:38.966 17:33:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:38.966 17:33:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:08:39.223 17:33:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:08:39.223 17:33:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 548365 00:08:39.223 17:33:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:08:39.223 17:33:17 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:42.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.497 17:33:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:42.497 17:33:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:08:42.497 17:33:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:42.497 17:33:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:42.497 17:33:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:42.497 17:33:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:42.497 17:33:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:08:42.497 17:33:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:08:42.497 17:33:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:08:42.497 nvmf hotplug test: fio failed as expected 00:08:42.497 17:33:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:42.497 17:33:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 
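The "fio failed as expected" line above is the point of the hotplug check: fio is started in the background against the four exported namespaces, the backing bdevs are then deleted over RPC mid-run, and the test passes only if fio exits with an error ("Operation not supported" on each /dev/nvme0nX). A condensed sketch of that pattern, reconstructed from the traced commands — the script paths and RPC names are taken from the trace above, while the final success/failure check is illustrative rather than the script's exact bookkeeping:

    # start long-running reads against the exported namespaces (backgrounded)
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3   # let I/O get going before pulling the bdevs
    # delete backing bdevs while fio is still issuing reads
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
    # fio must NOT finish cleanly once its devices are gone
    if wait "$fio_pid"; then
        echo 'hotplug test FAILED: fio survived bdev deletion'
    else
        echo 'nvmf hotplug test: fio failed as expected'
    fi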
00:08:42.497 17:33:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:08:42.497 17:33:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:08:42.755 17:33:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:08:42.755 17:33:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:08:42.755 17:33:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:42.755 17:33:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:08:42.755 17:33:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:42.755 17:33:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:42.755 17:33:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:08:42.755 17:33:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:42.755 17:33:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:42.755 rmmod nvme_rdma 00:08:42.755 rmmod nvme_fabrics 00:08:42.755 17:33:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:42.755 17:33:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:08:42.755 17:33:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:08:42.755 17:33:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 545429 ']' 00:08:42.755 17:33:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 545429 00:08:42.755 17:33:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 545429 ']' 00:08:42.755 17:33:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 545429 00:08:42.755 17:33:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:08:42.755 17:33:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:42.755 17:33:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 545429 00:08:42.755 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:42.755 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:42.755 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 545429' 00:08:42.755 killing process with pid 545429 00:08:42.755 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 545429 00:08:42.755 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 545429 00:08:43.013 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:43.013 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:08:43.013 00:08:43.013 real 0m29.183s 00:08:43.013 user 1m48.347s 00:08:43.013 sys 0m10.828s 00:08:43.013 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.013 
17:33:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:43.013 ************************************ 00:08:43.013 END TEST nvmf_fio_target 00:08:43.013 ************************************ 00:08:43.013 17:33:21 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:08:43.013 17:33:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:43.013 17:33:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:43.013 17:33:21 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:43.013 ************************************ 00:08:43.013 START TEST nvmf_bdevio 00:08:43.013 ************************************ 00:08:43.013 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:08:43.272 * Looking for test storage... 00:08:43.272 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:43.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.272 --rc genhtml_branch_coverage=1 00:08:43.272 --rc genhtml_function_coverage=1 00:08:43.272 --rc genhtml_legend=1 00:08:43.272 --rc geninfo_all_blocks=1 00:08:43.272 --rc geninfo_unexecuted_blocks=1 00:08:43.272 00:08:43.272 ' 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:43.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.272 --rc genhtml_branch_coverage=1 00:08:43.272 --rc genhtml_function_coverage=1 00:08:43.272 --rc genhtml_legend=1 00:08:43.272 --rc geninfo_all_blocks=1 00:08:43.272 --rc geninfo_unexecuted_blocks=1 00:08:43.272 00:08:43.272 ' 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:43.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.272 --rc genhtml_branch_coverage=1 00:08:43.272 --rc genhtml_function_coverage=1 00:08:43.272 --rc genhtml_legend=1 00:08:43.272 --rc geninfo_all_blocks=1 00:08:43.272 --rc geninfo_unexecuted_blocks=1 00:08:43.272 00:08:43.272 ' 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:43.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.272 --rc genhtml_branch_coverage=1 00:08:43.272 --rc genhtml_function_coverage=1 00:08:43.272 --rc genhtml_legend=1 00:08:43.272 --rc geninfo_all_blocks=1 00:08:43.272 --rc geninfo_unexecuted_blocks=1 00:08:43.272 00:08:43.272 ' 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:08:43.272 17:33:21 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:43.272 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@14 -- # nvmftestinit 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:08:43.272 17:33:21 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:08:49.824 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:08:49.824 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:08:49.824 17:33:27 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:49.824 Found net devices under 0000:18:00.0: mlx_0_0 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.824 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:49.825 Found net devices under 0000:18:00.1: mlx_0_1 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # rdma_device_init 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # uname 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@528 -- # allocate_nic_ips 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 
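[Note] The get_ip_address helper traced at common.sh@116-117 above reduces to a three-stage pipeline over `ip -o -4 addr show`; a minimal standalone sketch, with the interface name taken from this run:

    get_ip_address() {
        local interface=$1
        # field 4 of `ip -o -4 addr show <if>` is "ADDR/PREFIX"; drop the prefix length
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # prints 192.168.100.8 on this rig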
00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:49.825 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:49.825 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:08:49.825 altname enp24s0f0np0 00:08:49.825 altname ens785f0np0 00:08:49.825 inet 192.168.100.8/24 scope global mlx_0_0 00:08:49.825 valid_lft forever preferred_lft forever 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:49.825 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:49.825 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:08:49.825 altname enp24s0f1np1 00:08:49.825 altname ens785f1np1 00:08:49.825 inet 192.168.100.9/24 scope global mlx_0_1 00:08:49.825 valid_lft forever preferred_lft forever 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 
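[Note] get_rdma_if_list, re-entered above for get_available_rdma_ips, intersects the PCI-derived net_devs array with whatever rxe_cfg reports as RDMA-capable; a sketch reconstructed from the xtrace (assumes net_devs was filled by the earlier PCI scan and rxe_cfg_small.sh is invocable as rxe_cfg):

    get_rdma_if_list() {
        local net_dev rxe_net_dev rxe_net_devs
        # rxe_cfg prints one RDMA-capable netdev per output line
        mapfile -t rxe_net_devs < <(rxe_cfg rxe-net)
        for net_dev in "${net_devs[@]}"; do
            for rxe_net_dev in "${rxe_net_devs[@]}"; do
                if [[ $net_dev == "$rxe_net_dev" ]]; then
                    echo "$net_dev"
                    continue 2   # move on to the next net_dev once matched
                fi
            done
        done
    }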
00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@109 -- # continue 2 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:49.825 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:49.826 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:49.826 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:49.826 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:49.826 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:49.826 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:49.826 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:08:49.826 192.168.100.9' 00:08:49.826 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:08:49.826 192.168.100.9' 00:08:49.826 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # head -n 1 00:08:49.826 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:49.826 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:08:49.826 192.168.100.9' 00:08:49.826 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # tail -n +2 00:08:49.826 17:33:27 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # head -n 1 00:08:49.826 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:49.826 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:08:49.826 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:49.826 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:08:49.826 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' rdma 
== rdma ']' 00:08:49.826 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:08:49.826 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:08:49.826 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:08:49.826 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:49.826 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:49.826 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=552558 00:08:49.826 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:08:49.826 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 552558 00:08:49.826 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 552558 ']' 00:08:49.826 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.826 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:49.826 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.826 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:49.826 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:49.826 [2024-10-17 17:33:28.095634] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:08:49.826 [2024-10-17 17:33:28.095693] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.826 [2024-10-17 17:33:28.168811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:50.083 [2024-10-17 17:33:28.216076] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.083 [2024-10-17 17:33:28.216118] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.083 [2024-10-17 17:33:28.216127] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.083 [2024-10-17 17:33:28.216152] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.083 [2024-10-17 17:33:28.216159] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
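[Note] nvmfappstart above amounts to launching the target with the flags shown and blocking until its UNIX-domain RPC socket answers; a hedged by-hand equivalent (the rpc.py probe via rpc_get_methods is an assumption; the flags and socket path are copied from this run):

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    # poll until /var/tmp/spdk.sock accepts RPCs
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done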
00:08:50.083 [2024-10-17 17:33:28.217649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:50.083 [2024-10-17 17:33:28.217702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:50.083 [2024-10-17 17:33:28.217802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:50.083 [2024-10-17 17:33:28.217802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:50.083 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:50.083 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:08:50.083 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:08:50.083 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:50.083 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:50.083 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.083 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:50.083 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.083 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:50.083 [2024-10-17 17:33:28.402951] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10fabc0/0x10ff0b0) succeed. 00:08:50.083 [2024-10-17 17:33:28.413499] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x10fc250/0x1140750) succeed. 
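[Note] The rpc_cmd calls here and immediately below provision the target end to end (transport, malloc bdev, subsystem, namespace, listener); as standalone rpc.py invocations with the same arguments as this run, that is roughly:

    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420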
00:08:50.342 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.342 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:50.342 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.342 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:50.342 Malloc0 00:08:50.342 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.342 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:50.342 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.342 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:50.342 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.342 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:50.342 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.342 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:50.342 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.342 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:50.342 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.342 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:50.342 [2024-10-17 17:33:28.606207] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:50.342 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.342 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:08:50.342 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:08:50.342 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=() 00:08:50.342 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config 00:08:50.342 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:08:50.342 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:08:50.342 { 00:08:50.342 "params": { 00:08:50.342 "name": "Nvme$subsystem", 00:08:50.342 "trtype": "$TEST_TRANSPORT", 00:08:50.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:50.342 "adrfam": "ipv4", 00:08:50.343 "trsvcid": "$NVMF_PORT", 00:08:50.343 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:50.343 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:50.343 "hdgst": ${hdgst:-false}, 00:08:50.343 "ddgst": ${ddgst:-false} 00:08:50.343 }, 00:08:50.343 "method": "bdev_nvme_attach_controller" 00:08:50.343 } 00:08:50.343 EOF 00:08:50.343 )") 00:08:50.343 17:33:28 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat 00:08:50.343 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq . 00:08:50.343 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=, 00:08:50.343 17:33:28 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:08:50.343 "params": { 00:08:50.343 "name": "Nvme1", 00:08:50.343 "trtype": "rdma", 00:08:50.343 "traddr": "192.168.100.8", 00:08:50.343 "adrfam": "ipv4", 00:08:50.343 "trsvcid": "4420", 00:08:50.343 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:50.343 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:50.343 "hdgst": false, 00:08:50.343 "ddgst": false 00:08:50.343 }, 00:08:50.343 "method": "bdev_nvme_attach_controller" 00:08:50.343 }' 00:08:50.343 [2024-10-17 17:33:28.660782] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:08:50.343 [2024-10-17 17:33:28.660841] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid552583 ] 00:08:50.675 [2024-10-17 17:33:28.737048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:50.675 [2024-10-17 17:33:28.783978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.675 [2024-10-17 17:33:28.784065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.675 [2024-10-17 17:33:28.784067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.675 I/O targets: 00:08:50.675 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:08:50.675 00:08:50.675 00:08:50.675 CUnit - A unit testing framework for C - Version 2.1-3 00:08:50.675 http://cunit.sourceforge.net/ 00:08:50.675 00:08:50.675 00:08:50.675 Suite: bdevio tests on: Nvme1n1 00:08:50.675 Test: blockdev write read block ...passed 00:08:50.675 Test: blockdev write zeroes read block ...passed 00:08:50.675 Test: blockdev write zeroes read no split ...passed 00:08:50.675 Test: blockdev write zeroes read split ...passed 00:08:50.675 Test: blockdev write zeroes read split partial ...passed 00:08:50.675 Test: blockdev reset ...[2024-10-17 17:33:29.000959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:08:50.675 [2024-10-17 17:33:29.023850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:08:50.675 [2024-10-17 17:33:29.050828] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:50.675 passed 00:08:50.675 Test: blockdev write read 8 blocks ...passed 00:08:50.675 Test: blockdev write read size > 128k ...passed 00:08:50.675 Test: blockdev write read invalid size ...passed 00:08:50.675 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:50.675 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:50.675 Test: blockdev write read max offset ...passed 00:08:50.675 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:50.675 Test: blockdev writev readv 8 blocks ...passed 00:08:50.675 Test: blockdev writev readv 30 x 1block ...passed 00:08:50.675 Test: blockdev writev readv block ...passed 00:08:50.675 Test: blockdev writev readv size > 128k ...passed 00:08:50.675 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:50.675 Test: blockdev comparev and writev ...[2024-10-17 17:33:29.053804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:50.675 [2024-10-17 17:33:29.053833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:08:50.675 [2024-10-17 17:33:29.053846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:50.675 [2024-10-17 17:33:29.053856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:08:50.675 [2024-10-17 17:33:29.054042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:50.675 [2024-10-17 17:33:29.054053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:08:50.675 [2024-10-17 17:33:29.054064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:50.675 [2024-10-17 17:33:29.054074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:08:50.675 [2024-10-17 17:33:29.054245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:50.675 [2024-10-17 17:33:29.054256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:08:50.675 [2024-10-17 17:33:29.054267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:50.675 [2024-10-17 17:33:29.054276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:08:50.675 [2024-10-17 17:33:29.054449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:50.675 [2024-10-17 17:33:29.054461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:08:50.675 [2024-10-17 17:33:29.054471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:50.675 [2024-10-17 17:33:29.054481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:08:50.675 passed 00:08:50.675 Test: blockdev nvme passthru rw ...passed 00:08:50.675 Test: blockdev nvme passthru vendor specific ...[2024-10-17 17:33:29.054750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:08:50.675 [2024-10-17 17:33:29.054762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:08:50.675 [2024-10-17 17:33:29.054808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:08:50.675 [2024-10-17 17:33:29.054819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:08:50.675 [2024-10-17 17:33:29.054863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:08:50.675 [2024-10-17 17:33:29.054873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:08:50.675 [2024-10-17 17:33:29.054918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:08:50.675 [2024-10-17 17:33:29.054928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:08:50.675 passed 00:08:50.675 Test: blockdev nvme admin passthru ...passed 00:08:50.675 Test: blockdev copy ...passed 00:08:50.675 00:08:50.675 Run Summary: Type Total Ran Passed Failed Inactive 00:08:50.675 suites 1 1 n/a 0 0 00:08:50.675 tests 23 23 23 0 0 00:08:50.675 asserts 152 152 152 0 n/a 00:08:50.675 00:08:50.675 Elapsed time = 0.175 seconds 00:08:50.956 17:33:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:50.956 17:33:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.956 17:33:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:50.956 17:33:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.956 17:33:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:08:50.956 17:33:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:08:50.957 17:33:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:08:50.957 17:33:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:08:50.957 17:33:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:08:50.957 17:33:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:08:50.957 17:33:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:08:50.957 17:33:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:50.957 17:33:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:08:50.957 rmmod nvme_rdma 00:08:50.957 rmmod nvme_fabrics 00:08:50.957 17:33:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:50.957 17:33:29 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:08:50.957 17:33:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:08:50.957 17:33:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 552558 ']' 00:08:50.957 17:33:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 552558 00:08:50.957 17:33:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 552558 ']' 00:08:50.957 17:33:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 552558 00:08:50.957 17:33:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:08:50.957 17:33:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:50.957 17:33:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 552558 00:08:51.217 17:33:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:08:51.217 17:33:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:08:51.217 17:33:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 552558' 00:08:51.217 killing process with pid 552558 00:08:51.217 17:33:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 552558 00:08:51.217 17:33:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 552558 00:08:51.476 17:33:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:08:51.476 17:33:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:08:51.476 00:08:51.476 real 0m8.327s 00:08:51.476 user 0m8.563s 00:08:51.476 sys 0m5.575s 00:08:51.476 17:33:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.476 17:33:29 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:51.476 ************************************ 00:08:51.476 END TEST nvmf_bdevio 00:08:51.476 ************************************ 00:08:51.476 17:33:29 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:51.476 00:08:51.476 real 4m12.319s 00:08:51.476 user 10m46.736s 00:08:51.476 sys 1m33.674s 00:08:51.476 17:33:29 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.476 17:33:29 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:51.476 ************************************ 00:08:51.476 END TEST nvmf_target_core 00:08:51.476 ************************************ 00:08:51.476 17:33:29 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:08:51.476 17:33:29 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:51.476 17:33:29 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.476 17:33:29 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:51.476 ************************************ 00:08:51.476 START TEST nvmf_target_extra 00:08:51.476 ************************************ 00:08:51.476 17:33:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:08:51.735 * Looking for test storage... 00:08:51.735 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:51.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.735 --rc genhtml_branch_coverage=1 00:08:51.735 --rc genhtml_function_coverage=1 00:08:51.735 --rc genhtml_legend=1 00:08:51.735 --rc geninfo_all_blocks=1 00:08:51.735 --rc geninfo_unexecuted_blocks=1 00:08:51.735 00:08:51.735 ' 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:51.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.735 --rc genhtml_branch_coverage=1 00:08:51.735 --rc genhtml_function_coverage=1 00:08:51.735 --rc genhtml_legend=1 00:08:51.735 --rc geninfo_all_blocks=1 00:08:51.735 --rc geninfo_unexecuted_blocks=1 00:08:51.735 00:08:51.735 ' 00:08:51.735 17:33:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:51.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.735 --rc genhtml_branch_coverage=1 00:08:51.735 --rc genhtml_function_coverage=1 00:08:51.735 --rc genhtml_legend=1 00:08:51.735 --rc geninfo_all_blocks=1 00:08:51.736 --rc geninfo_unexecuted_blocks=1 00:08:51.736 00:08:51.736 ' 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:51.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.736 --rc genhtml_branch_coverage=1 00:08:51.736 --rc genhtml_function_coverage=1 00:08:51.736 --rc genhtml_legend=1 00:08:51.736 --rc geninfo_all_blocks=1 00:08:51.736 --rc geninfo_unexecuted_blocks=1 00:08:51.736 00:08:51.736 ' 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.736 17:33:29 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.736 17:33:30 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:51.736 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:51.736 17:33:30 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:51.736 17:33:30 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:51.736 17:33:30 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:51.736 17:33:30 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:51.736 17:33:30 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:08:51.736 17:33:30 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:08:51.736 17:33:30 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:08:51.736 17:33:30 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:51.736 17:33:30 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.736 17:33:30 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:08:51.736 ************************************ 00:08:51.736 START TEST nvmf_example 00:08:51.736 ************************************ 00:08:51.736 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:08:51.996 * Looking for test storage... 
00:08:51.996 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:51.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.996 --rc genhtml_branch_coverage=1 00:08:51.996 --rc genhtml_function_coverage=1 00:08:51.996 --rc genhtml_legend=1 00:08:51.996 --rc geninfo_all_blocks=1 00:08:51.996 --rc geninfo_unexecuted_blocks=1 00:08:51.996 00:08:51.996 ' 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:51.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.996 --rc genhtml_branch_coverage=1 00:08:51.996 --rc genhtml_function_coverage=1 00:08:51.996 --rc genhtml_legend=1 00:08:51.996 --rc geninfo_all_blocks=1 00:08:51.996 --rc geninfo_unexecuted_blocks=1 00:08:51.996 00:08:51.996 ' 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:51.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.996 --rc genhtml_branch_coverage=1 00:08:51.996 --rc genhtml_function_coverage=1 00:08:51.996 --rc genhtml_legend=1 00:08:51.996 --rc geninfo_all_blocks=1 00:08:51.996 --rc geninfo_unexecuted_blocks=1 00:08:51.996 00:08:51.996 ' 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:51.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.996 --rc genhtml_branch_coverage=1 00:08:51.996 --rc genhtml_function_coverage=1 00:08:51.996 --rc genhtml_legend=1 00:08:51.996 --rc geninfo_all_blocks=1 00:08:51.996 --rc geninfo_unexecuted_blocks=1 00:08:51.996 00:08:51.996 ' 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 
00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:51.996 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:51.996 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:51.997 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:51.997 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:51.997 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 
00:08:51.997 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:51.997 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:51.997 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:51.997 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:51.997 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:51.997 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:51.997 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:51.997 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:51.997 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:51.997 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:08:51.997 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:51.997 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:08:51.997 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:08:51.997 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:08:51.997 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.997 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.997 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.997 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:08:51.997 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:08:51.997 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:08:51.997 17:33:30 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:58.584 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:58.584 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:08:58.584 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:58.584 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:58.584 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:58.584 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:58.584 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:58.584 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 
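The gather_supported_nvmf_pci_devs prologue above mixes three declare/local flags that are easy to misread in a trace: -a (indexed array), -A (associative array), and -g (global scope even when declared inside a function, which is why net_devs is still usable after the helper returns). A simplified sketch of the pattern; the names mirror the trace but the body is invented:

#!/usr/bin/env bash
# -a indexed array, -A associative array, -ga global indexed array.
gather_example() {
    local -a pci_devs=()          # scoped to this call
    local -A pci_drivers=()       # associative: pci_drivers[pci_addr]=driver
    local -ga net_devs=()         # -g: survives the function return
    pci_devs+=(0000:18:00.0 0000:18:00.1)
    pci_drivers[0000:18:00.0]=mlx5_core
    net_devs+=(mlx_0_0 mlx_0_1)
}
gather_example
echo "net_devs visible globally: ${net_devs[*]}"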
00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:08:58.585 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 
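xtrace prints the test as [[ 0x1013 == \0\x\1\0\1\7 ]] because the right-hand side of == inside [[ ]] is a glob pattern, so the trace backslash-escapes every character to show it is being matched literally. A sketch of the same device-ID dispatch; reading 0x1013 as ConnectX-4 and 0x1017/0x1019 as the ConnectX-5 family is our gloss on the Mellanox IDs, not wording from the script:

#!/usr/bin/env bash
device=0x1013   # from the "Found 0000:18:00.0 (0x15b3 - 0x1013)" entry above
if [[ $device == 0x1017 || $device == 0x1019 ]]; then
    echo "ConnectX-5 family handling"
else
    # the trace falls through here and widens the connect timeout
    NVME_CONNECT='nvme connect -i 15'
    echo "using: $NVME_CONNECT"
fi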
00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:08:58.585 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:08:58.585 Found net devices under 0000:18:00.0: mlx_0_0 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:08:58.585 Found net devices under 0000:18:00.1: mlx_0_1 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:08:58.585 17:33:36 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # rdma_device_init 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # uname 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe ib_cm 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe ib_core 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe ib_umad 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@70 -- # modprobe iw_cm 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@528 -- # allocate_nic_ips 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # get_rdma_if_list 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:08:58.585 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:58.585 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:08:58.585 altname enp24s0f0np0 00:08:58.585 altname ens785f0np0 00:08:58.585 inet 192.168.100.8/24 scope global mlx_0_0 00:08:58.585 valid_lft forever preferred_lft forever 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:08:58.585 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:08:58.586 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:58.586 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:08:58.586 altname enp24s0f1np1 00:08:58.586 altname ens785f1np1 00:08:58.586 inet 192.168.100.9/24 scope global mlx_0_1 00:08:58.586 valid_lft forever preferred_lft forever 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # 
get_available_rdma_ips 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # get_rdma_if_list 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_0 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@108 -- # echo mlx_0_1 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@109 -- # continue 2 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # awk '{print $4}' 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # cut -d/ -f1 00:08:58.586 17:33:36 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:08:58.586 192.168.100.9' 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:08:58.586 192.168.100.9' 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # head -n 1 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:08:58.586 192.168.100.9' 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # tail -n +2 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # head -n 1 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=555768 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 555768 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 555768 ']' 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
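The two target addresses assembled a few entries up come from a small, reusable pipeline: get_ip_address strips the prefix length from `ip -o -4 addr show` output, and head/tail slice the resulting newline-separated list into first and second IPs. A self-contained sketch of both steps, with the interface names and addresses taken from the trace:

#!/usr/bin/env bash
# Mirrors nvmf/common.sh@117: field 4 of `ip -o -4 addr show` is "addr/prefix".
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
RDMA_IP_LIST=$(for ifc in mlx_0_0 mlx_0_1; do get_ip_address "$ifc"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9
echo "targets: $NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"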
00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:58.586 17:33:36 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:59.519 17:33:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:59.519 17:33:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:08:59.519 17:33:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:59.519 17:33:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:59.519 17:33:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:59.519 17:33:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:59.519 17:33:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.519 17:33:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:59.519 17:33:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.519 17:33:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:59.519 17:33:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.519 17:33:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:59.519 17:33:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.519 17:33:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:59.519 17:33:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:59.519 17:33:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.519 17:33:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:59.519 17:33:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.519 17:33:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:59.519 17:33:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:59.519 17:33:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.519 17:33:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:59.519 17:33:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.519 17:33:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:59.519 17:33:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.519 17:33:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:59.519 17:33:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]]
00:08:59.519 17:33:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:08:59.519 17:33:37 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:09:11.711 Initializing NVMe Controllers
00:09:11.711 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:09:11.711 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:11.711 Initialization complete. Launching workers.
00:09:11.711 ========================================================
00:09:11.711                                                                  Latency(us)
00:09:11.711 Device Information                                            :       IOPS      MiB/s    Average        min        max
00:09:11.711 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   23341.51      91.18    2741.77     642.71   13989.59
00:09:11.711 ========================================================
00:09:11.711 Total                                                         :   23341.51      91.18    2741.77     642.71   13989.59
00:09:11.711
00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup
00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:09:11.711 rmmod nvme_rdma
00:09:11.711 rmmod nvme_fabrics
00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 555768 ']'
00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 555768
00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 555768 ']'
00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 555768
00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname
00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 555768
00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf
00:09:11.711 17:33:49
nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 555768' 00:09:11.711 killing process with pid 555768 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 555768 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 555768 00:09:11.711 nvmf threads initialize successfully 00:09:11.711 bdev subsystem init successfully 00:09:11.711 created a nvmf target service 00:09:11.711 create targets's poll groups done 00:09:11.711 all subsystems of target started 00:09:11.711 nvmf target is running 00:09:11.711 all subsystems of target stopped 00:09:11.711 destroy targets's poll groups done 00:09:11.711 destroyed the nvmf target service 00:09:11.711 bdev subsystem finish successfully 00:09:11.711 nvmf threads destroy successfully 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:11.711 00:09:11.711 real 0m19.496s 00:09:11.711 user 0m52.230s 00:09:11.711 sys 0m5.505s 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:11.711 ************************************ 00:09:11.711 END TEST nvmf_example 00:09:11.711 ************************************ 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:11.711 ************************************ 00:09:11.711 START TEST nvmf_filesystem 00:09:11.711 ************************************ 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:09:11.711 * Looking for test storage... 
00:09:11.711 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:11.711 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:11.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.712 --rc genhtml_branch_coverage=1 00:09:11.712 --rc genhtml_function_coverage=1 00:09:11.712 --rc genhtml_legend=1 00:09:11.712 --rc geninfo_all_blocks=1 00:09:11.712 --rc geninfo_unexecuted_blocks=1 00:09:11.712 00:09:11.712 ' 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:11.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.712 --rc genhtml_branch_coverage=1 00:09:11.712 --rc genhtml_function_coverage=1 00:09:11.712 --rc genhtml_legend=1 00:09:11.712 --rc geninfo_all_blocks=1 00:09:11.712 --rc geninfo_unexecuted_blocks=1 00:09:11.712 00:09:11.712 ' 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:11.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.712 --rc genhtml_branch_coverage=1 00:09:11.712 --rc genhtml_function_coverage=1 00:09:11.712 --rc genhtml_legend=1 00:09:11.712 --rc geninfo_all_blocks=1 00:09:11.712 --rc geninfo_unexecuted_blocks=1 00:09:11.712 00:09:11.712 ' 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:11.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.712 --rc genhtml_branch_coverage=1 00:09:11.712 --rc genhtml_function_coverage=1 00:09:11.712 --rc genhtml_legend=1 00:09:11.712 --rc geninfo_all_blocks=1 00:09:11.712 --rc geninfo_unexecuted_blocks=1 00:09:11.712 00:09:11.712 ' 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:09:11.712 17:33:49 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:11.712 
17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem 
-- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:09:11.712 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=n 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:09:11.713 17:33:49 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:11.713 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:09:11.713 17:33:49 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:11.713 #define SPDK_CONFIG_H 00:09:11.713 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:11.713 #define SPDK_CONFIG_APPS 1 00:09:11.713 #define SPDK_CONFIG_ARCH native 00:09:11.713 #undef SPDK_CONFIG_ASAN 00:09:11.713 #undef SPDK_CONFIG_AVAHI 00:09:11.713 #undef SPDK_CONFIG_CET 00:09:11.713 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:11.713 #define SPDK_CONFIG_COVERAGE 1 00:09:11.713 #define SPDK_CONFIG_CROSS_PREFIX 00:09:11.713 #undef SPDK_CONFIG_CRYPTO 00:09:11.713 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:11.713 #undef SPDK_CONFIG_CUSTOMOCF 00:09:11.713 #undef SPDK_CONFIG_DAOS 00:09:11.713 #define SPDK_CONFIG_DAOS_DIR 00:09:11.713 #define SPDK_CONFIG_DEBUG 1 00:09:11.713 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:11.713 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:09:11.713 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:11.713 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:11.713 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:11.713 #undef SPDK_CONFIG_DPDK_UADK 00:09:11.713 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:09:11.713 #define SPDK_CONFIG_EXAMPLES 1 00:09:11.713 #undef SPDK_CONFIG_FC 00:09:11.713 #define SPDK_CONFIG_FC_PATH 00:09:11.713 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:11.713 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:11.713 #define SPDK_CONFIG_FSDEV 1 00:09:11.713 #undef SPDK_CONFIG_FUSE 00:09:11.713 #undef SPDK_CONFIG_FUZZER 00:09:11.713 #define SPDK_CONFIG_FUZZER_LIB 00:09:11.713 #undef SPDK_CONFIG_GOLANG 00:09:11.713 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:11.713 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:11.713 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:11.713 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:11.713 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:11.713 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:11.713 #undef SPDK_CONFIG_HAVE_LZ4 00:09:11.713 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:11.713 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:11.713 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:11.713 #define SPDK_CONFIG_IDXD 1 00:09:11.713 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:11.713 #undef SPDK_CONFIG_IPSEC_MB 00:09:11.713 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:11.713 #define SPDK_CONFIG_ISAL 1 00:09:11.713 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:11.713 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:11.713 #define SPDK_CONFIG_LIBDIR 00:09:11.713 #undef SPDK_CONFIG_LTO 00:09:11.713 #define SPDK_CONFIG_MAX_LCORES 128 00:09:11.713 #define SPDK_CONFIG_NVME_CUSE 1 00:09:11.713 #undef SPDK_CONFIG_OCF 00:09:11.713 #define SPDK_CONFIG_OCF_PATH 00:09:11.713 #define SPDK_CONFIG_OPENSSL_PATH 00:09:11.713 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:11.713 #define SPDK_CONFIG_PGO_DIR 00:09:11.713 #undef SPDK_CONFIG_PGO_USE 00:09:11.713 #define SPDK_CONFIG_PREFIX /usr/local 00:09:11.713 #undef SPDK_CONFIG_RAID5F 00:09:11.713 #undef SPDK_CONFIG_RBD 00:09:11.713 #define SPDK_CONFIG_RDMA 1 00:09:11.713 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:11.713 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:11.713 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:11.713 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:11.713 #define SPDK_CONFIG_SHARED 1 00:09:11.713 #undef SPDK_CONFIG_SMA 00:09:11.713 #define SPDK_CONFIG_TESTS 1 00:09:11.713 #undef SPDK_CONFIG_TSAN 00:09:11.713 #define SPDK_CONFIG_UBLK 1 00:09:11.713 #define SPDK_CONFIG_UBSAN 1 00:09:11.713 #undef SPDK_CONFIG_UNIT_TESTS 00:09:11.713 #undef SPDK_CONFIG_URING 
00:09:11.713 #define SPDK_CONFIG_URING_PATH 00:09:11.713 #undef SPDK_CONFIG_URING_ZNS 00:09:11.713 #undef SPDK_CONFIG_USDT 00:09:11.713 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:11.713 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:11.713 #undef SPDK_CONFIG_VFIO_USER 00:09:11.713 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:11.713 #define SPDK_CONFIG_VHOST 1 00:09:11.713 #define SPDK_CONFIG_VIRTIO 1 00:09:11.713 #undef SPDK_CONFIG_VTUNE 00:09:11.714 #define SPDK_CONFIG_VTUNE_DIR 00:09:11.714 #define SPDK_CONFIG_WERROR 1 00:09:11.714 #define SPDK_CONFIG_WPDK_DIR 00:09:11.714 #undef SPDK_CONFIG_XNVME 00:09:11.714 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
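Note on the record above: the backslash-heavy pattern is just bash xtrace's rendering of a plain glob match. applications.sh@23 reads the generated SPDK config header and tests whether it contains the literal text '#define SPDK_CONFIG_DEBUG', which decides whether debug-only app behavior applies. A minimal sketch of the same probe, assuming the header sits at include/spdk/config.h under the checkout (the real script derives the path from its own location):

    # Probe the generated config header for a debug build.
    config_h=/var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h
    if [[ $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "debug build: debug-app checks apply"
    fi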
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:11.714 17:33:49 
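The paths/export.sh records above prepend the same Go, protoc, and golangci directories on every sourcing, which is why PATH carries the triple many times over by this point in the run; it is harmless, only noisy. An idempotent prepend, shown purely as an illustration (export.sh itself does not dedupe), would keep the variable stable:

    # Prepend a directory to PATH only if it is not already present.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already present, nothing to do
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/protoc/21.7/bin
    path_prepend /opt/go/1.21.1/bin
    export PATH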
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # 
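The pm/common records above select which resource monitors this node will run: collect-cpu-load and collect-vmstat always, plus collect-cpu-temp and collect-bmc-pm because the guards show a bare-metal Linux host (the vendor probe is not QEMU, and /.dockerenv is absent), with MONITOR_RESOURCES_SUDO marking that only the BMC power reader needs sudo. Condensed, the traced logic looks like this (the DMI path in the QEMU guard is an assumption; the trace shows only the value it produced):

    declare -A MONITOR_RESOURCES_SUDO=(
        ["collect-bmc-pm"]=1      # BMC power counters require root
        ["collect-cpu-load"]=0
        ["collect-cpu-temp"]=0
        ["collect-vmstat"]=0
    )
    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
    if [[ $(uname -s) == Linux && ! -e /.dockerenv ]] &&
       [[ $(cat /sys/class/dmi/id/board_vendor 2>/dev/null) != QEMU ]]; then
        MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)
    fi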
export SPDK_TEST_ISCSI 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:09:11.714 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # 
export SPDK_TEST_VHOST 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export 
SPDK_TEST_VMD 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export 
SPDK_TEST_ACCEL_IAA 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:11.715 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
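The long run of paired ': <value>' / 'export SPDK_TEST_*' records from autotest_common.sh@58 onward is the shell default-and-export idiom: ':' is a no-op, but the parameter expansion inside its argument assigns a default only when the variable is unset, so the values injected earlier by autorun-spdk.conf (SPDK_TEST_NVMF=1, SPDK_TEST_NVME_CLI=1, SPDK_TEST_NVMF_NICS=mlx5, SPDK_TEST_NVMF_TRANSPORT=rdma, SPDK_RUN_UBSAN=1) survive while every other knob falls back to 0 or empty. In miniature (the defaults shown are illustrative):

    # ':' discards its arguments, but the expansion inside them assigns a
    # default when the variable is unset; the result is then exported.
    : "${SPDK_TEST_NVMF:=0}";     export SPDK_TEST_NVMF
    : "${SPDK_RUN_UBSAN:=0}";     export SPDK_RUN_UBSAN
    : "${SPDK_TEST_NVMF_NICS:=}"; export SPDK_TEST_NVMF_NICS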
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j72 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=rdma 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 557656 ]] 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 557656 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
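The 'kill -0 557656' record above sends no signal at all: signal 0 only performs the existence and permission checks, making it the stock shell probe for whether the test process whose PID was captured earlier is still alive before set_test_storage provisions scratch space for it. For example:

    pid=557656                     # PID taken from the trace above
    if kill -0 "$pid" 2>/dev/null; then
        echo "process $pid is alive; safe to provision its test storage"
    fi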
common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:09:11.716 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.VarFeW 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.VarFeW/tests/target /tmp/spdk.VarFeW 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=51631063040 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=61734383616 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10103320576 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30852395008 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30867189760 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=14794752 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12323995648 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12346880000 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=22884352 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30866784256 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30867193856 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=409600 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:09:11.717 17:33:49 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6173425664 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6173437952 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:09:11.717 * Looking for test storage... 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=51631063040 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=12317913088 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:11.717 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:09:11.717 17:33:49 
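set_test_storage, traced across the preceding records, indexes df -T output into per-mount associative arrays (filesystem type, size, available space), resolves which mount backs the candidate test directory, and exports SPDK_TEST_STORAGE only once that mount shows at least the requested ~2.2 GB of headroom, with a special case for overlay/tmpfs roots. A compact sketch of the same bookkeeping, assuming GNU df with byte-sized output (the harness normalizes units in its own way):

    # Index free space by mount point, then vet a candidate directory.
    requested_size=2214592512          # 2 GiB + 64 MiB slack, as traced
    declare -A avails
    while read -r _src _fs _size _used avail _pct mount; do
        avails["$mount"]=$avail
    done < <(df -T --block-size=1 | tail -n +2)
    target=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target
    mount_point=$(df --output=target "$target" | tail -n 1)  # dir must exist
    if (( ${avails[$mount_point]:-0} >= requested_size )); then
        export SPDK_TEST_STORAGE=$target
    fi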
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:11.717 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:11.718 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:11.718 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:11.718 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:09:11.718 17:33:49 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:11.718 17:33:50 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:11.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.718 --rc genhtml_branch_coverage=1 00:09:11.718 --rc genhtml_function_coverage=1 00:09:11.718 --rc genhtml_legend=1 00:09:11.718 --rc geninfo_all_blocks=1 00:09:11.718 --rc geninfo_unexecuted_blocks=1 00:09:11.718 00:09:11.718 ' 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:11.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.718 --rc genhtml_branch_coverage=1 00:09:11.718 --rc genhtml_function_coverage=1 00:09:11.718 --rc genhtml_legend=1 00:09:11.718 --rc geninfo_all_blocks=1 00:09:11.718 --rc geninfo_unexecuted_blocks=1 00:09:11.718 00:09:11.718 ' 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:11.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.718 --rc genhtml_branch_coverage=1 00:09:11.718 --rc genhtml_function_coverage=1 00:09:11.718 --rc genhtml_legend=1 00:09:11.718 --rc geninfo_all_blocks=1 00:09:11.718 --rc geninfo_unexecuted_blocks=1 00:09:11.718 00:09:11.718 ' 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:11.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.718 --rc genhtml_branch_coverage=1 00:09:11.718 --rc genhtml_function_coverage=1 00:09:11.718 --rc genhtml_legend=1 00:09:11.718 --rc geninfo_all_blocks=1 00:09:11.718 --rc geninfo_unexecuted_blocks=1 00:09:11.718 00:09:11.718 ' 00:09:11.718 
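The scripts/common.sh stretch above is the harness's version comparator at work: 'lt 1.15 2' splits each version on dots, dashes, and colons, then walks the components numerically until one side wins, and the lcov 1.x result is what selects the legacy '--rc lcov_branch_coverage' options exported right after. A pared-down equivalent of cmp_versions for the less-than case:

    # Return 0 (true) when version $1 sorts strictly below version $2.
    version_lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # versions are equal
    }
    version_lt 1.15 2 && echo "lcov predates 2.x: keep legacy LCOV_OPTS"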
17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:11.718 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:11.976 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.976 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.976 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.976 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.976 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.976 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.976 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.976 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.976 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.976 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:11.976 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:09:11.976 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:09:11.976 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.976 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:11.976 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:11.976 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:11.976 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:11.976 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:11.976 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.976 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.976 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.976 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.977 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.977 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.977 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:11.977 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.977 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:11.977 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:11.977 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:11.977 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:11.977 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.977 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.977 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:11.977 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:11.977 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:11.977 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:11.977 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:11.977 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:11.977 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:11.977 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:11.977 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:09:11.977 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:11.977 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:11.977 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:11.977 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:11.977 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.977 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.977 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.977 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:11.977 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:11.977 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:11.977 17:33:50 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@320 -- # e810=() 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:09:18.532 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:18.532 
17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:09:18.532 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:18.532 Found net devices under 0000:18:00.0: mlx_0_0 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:18.532 Found net devices under 0000:18:00.1: mlx_0_1 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # rdma_device_init 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # uname 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@528 -- # allocate_nic_ips 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:18.532 17:33:56 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:18.532 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:18.533 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:18.533 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:09:18.533 altname enp24s0f0np0 00:09:18.533 altname ens785f0np0 00:09:18.533 inet 192.168.100.8/24 scope global mlx_0_0 00:09:18.533 valid_lft forever preferred_lft forever 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:18.533 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:18.533 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:09:18.533 altname enp24s0f1np1 00:09:18.533 altname ens785f1np1 00:09:18.533 inet 192.168.100.9/24 scope global mlx_0_1 00:09:18.533 valid_lft forever preferred_lft forever 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:09:18.533 17:33:56 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@109 -- # continue 2 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@91 -- # get_ip_address 
mlx_0_1 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:09:18.533 192.168.100.9' 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:09:18.533 192.168.100.9' 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # head -n 1 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:09:18.533 192.168.100.9' 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # tail -n +2 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # head -n 1 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:18.533 ************************************ 00:09:18.533 START TEST nvmf_filesystem_no_in_capsule 00:09:18.533 ************************************ 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:18.533 17:33:56 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=560583 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 560583 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 560583 ']' 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:18.533 [2024-10-17 17:33:56.614645] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:09:18.533 [2024-10-17 17:33:56.614710] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.533 [2024-10-17 17:33:56.690682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:18.533 [2024-10-17 17:33:56.736843] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:18.533 [2024-10-17 17:33:56.736888] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:18.533 [2024-10-17 17:33:56.736898] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:18.533 [2024-10-17 17:33:56.736906] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:18.533 [2024-10-17 17:33:56.736914] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
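Earlier in the trace, the gather_supported_nvmf_pci_devs / allocate_nic_ips block of nvmf/common.sh probed the machine for RDMA-capable NICs: it seeds a PCI allow-list from known Intel (0x8086) and Mellanox (0x15b3) device IDs, resolves each surviving PCI function to its netdev through sysfs, and then harvests each interface's IPv4 address (192.168.100.8 and 192.168.100.9 here). A condensed sketch of that flow, assuming the standard sysfs layout; only the ip/awk/cut pipeline is verbatim from the trace, the surrounding loop is illustrative:

#!/usr/bin/env bash
# Extract the primary IPv4 address of an interface, exactly as traced above.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

# Walk PCI devices and report netdevs backed by a Mellanox function.
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x15b3 ]] || continue        # Mellanox vendor ID, per the trace
    for net_dev in "$pci"/net/*; do
        [[ -e $net_dev ]] || continue                    # function may have no netdev bound
        echo "Found net devices under ${pci##*/}: ${net_dev##*/} ($(get_ip_address "${net_dev##*/}"))"
    done
done

On this node the two mlx5 functions at 0000:18:00.0/1 surface as mlx_0_0 and mlx_0_1, which is why the harness settles on 192.168.100.8 as NVMF_FIRST_TARGET_IP.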
00:09:18.533 [2024-10-17 17:33:56.738306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.533 [2024-10-17 17:33:56.738397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:18.533 [2024-10-17 17:33:56.738426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:18.533 [2024-10-17 17:33:56.738429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:09:18.533 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.534 17:33:56 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:18.534 [2024-10-17 17:33:56.899796] rdma.c:2735:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:09:18.534 [2024-10-17 17:33:56.920136] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7142c0/0x7187b0) succeed. 00:09:18.791 [2024-10-17 17:33:56.930612] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x715950/0x759e50) succeed. 
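The RDMA transport was just created with rpc_cmd; the malloc bdev, subsystem, namespace, and listener follow in the trace below. rpc_cmd is a thin wrapper over scripts/rpc.py, so the whole provisioning sequence can be consolidated as direct calls, assuming the default /var/tmp/spdk.sock RPC socket (all flags are copied from the trace):

rpc=./scripts/rpc.py   # run from the spdk checkout
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0   # -c 0: in-capsule data disabled
$rpc bdev_malloc_create 512 512 -b Malloc1                                  # 512 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The -c 0 is the point of this first test block: with in-capsule data disabled, the target clamps to the 256-byte minimum and logs the "In capsule data size is set to 256" warning seen above.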
00:09:18.791 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.791 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:18.791 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.791 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:18.791 Malloc1 00:09:18.791 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.791 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:18.791 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.791 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:18.791 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.791 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:18.791 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.791 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:19.048 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.048 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:19.048 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.048 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:19.048 [2024-10-17 17:33:57.188478] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:19.048 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.048 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:19.048 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:09:19.048 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:09:19.048 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:09:19.048 17:33:57 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:09:19.048 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:19.048 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.048 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:19.048 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.048 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:09:19.048 { 00:09:19.048 "name": "Malloc1", 00:09:19.048 "aliases": [ 00:09:19.048 "82cb6970-153c-4da6-aab7-c98ac3d514ca" 00:09:19.048 ], 00:09:19.048 "product_name": "Malloc disk", 00:09:19.048 "block_size": 512, 00:09:19.048 "num_blocks": 1048576, 00:09:19.048 "uuid": "82cb6970-153c-4da6-aab7-c98ac3d514ca", 00:09:19.048 "assigned_rate_limits": { 00:09:19.048 "rw_ios_per_sec": 0, 00:09:19.048 "rw_mbytes_per_sec": 0, 00:09:19.048 "r_mbytes_per_sec": 0, 00:09:19.048 "w_mbytes_per_sec": 0 00:09:19.048 }, 00:09:19.048 "claimed": true, 00:09:19.048 "claim_type": "exclusive_write", 00:09:19.048 "zoned": false, 00:09:19.048 "supported_io_types": { 00:09:19.048 "read": true, 00:09:19.048 "write": true, 00:09:19.048 "unmap": true, 00:09:19.048 "flush": true, 00:09:19.048 "reset": true, 00:09:19.048 "nvme_admin": false, 00:09:19.048 "nvme_io": false, 00:09:19.048 "nvme_io_md": false, 00:09:19.048 "write_zeroes": true, 00:09:19.048 "zcopy": true, 00:09:19.048 "get_zone_info": false, 00:09:19.048 "zone_management": false, 00:09:19.048 "zone_append": false, 00:09:19.048 "compare": false, 00:09:19.048 "compare_and_write": false, 00:09:19.048 "abort": true, 00:09:19.048 "seek_hole": false, 00:09:19.048 "seek_data": false, 00:09:19.048 "copy": true, 00:09:19.048 "nvme_iov_md": false 00:09:19.048 }, 00:09:19.048 "memory_domains": [ 00:09:19.048 { 00:09:19.048 "dma_device_id": "system", 00:09:19.048 "dma_device_type": 1 00:09:19.048 }, 00:09:19.048 { 00:09:19.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.048 "dma_device_type": 2 00:09:19.048 } 00:09:19.048 ], 00:09:19.048 "driver_specific": {} 00:09:19.048 } 00:09:19.048 ]' 00:09:19.048 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:09:19.048 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:09:19.048 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:09:19.048 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:09:19.049 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:09:19.049 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:09:19.049 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # 
malloc_size=536870912 00:09:19.049 17:33:57 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:20.945 17:33:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:20.945 17:33:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:09:20.945 17:33:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:20.945 17:33:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:20.945 17:33:58 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:09:22.839 17:34:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:22.839 17:34:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:22.839 17:34:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:22.839 17:34:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:22.839 17:34:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:22.839 17:34:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:09:22.839 17:34:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:22.839 17:34:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:22.839 17:34:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:22.839 17:34:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:22.839 17:34:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:22.839 17:34:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:22.839 17:34:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:22.839 17:34:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:22.839 17:34:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:22.839 17:34:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 
-- # (( nvme_size == malloc_size )) 00:09:22.839 17:34:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:22.839 17:34:00 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:22.839 17:34:01 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:23.771 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:23.771 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:23.771 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:23.771 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.771 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:23.771 ************************************ 00:09:23.771 START TEST filesystem_ext4 00:09:23.771 ************************************ 00:09:23.771 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:23.771 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:23.771 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:23.771 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:23.771 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:09:23.771 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:23.771 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:09:23.771 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:09:23.771 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:09:23.771 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:09:23.771 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:23.771 mke2fs 1.47.0 (5-Feb-2023) 00:09:24.029 Discarding device blocks: 0/522240 done 00:09:24.029 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:24.029 Filesystem UUID: 0616d24e-3310-43f9-8fe6-7c5a6a33db1b 00:09:24.029 Superblock backups stored on 
blocks: 00:09:24.029 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:24.029 00:09:24.029 Allocating group tables: 0/64 done 00:09:24.029 Writing inode tables: 0/64 done 00:09:24.029 Creating journal (8192 blocks): done 00:09:24.029 Writing superblocks and filesystem accounting information: 0/64 done 00:09:24.029 00:09:24.029 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:09:24.029 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:24.029 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:24.029 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:24.029 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:24.029 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:24.029 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:24.029 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:24.029 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 560583 00:09:24.029 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:24.029 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:24.029 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:24.029 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:24.029 00:09:24.029 real 0m0.206s 00:09:24.029 user 0m0.036s 00:09:24.029 sys 0m0.068s 00:09:24.029 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.029 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:24.029 ************************************ 00:09:24.029 END TEST filesystem_ext4 00:09:24.029 ************************************ 00:09:24.029 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:24.029 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:24.029 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.029 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:09:24.029 ************************************ 00:09:24.029 START TEST filesystem_btrfs 00:09:24.029 ************************************ 00:09:24.029 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:24.029 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:24.029 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:24.029 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:24.029 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:09:24.029 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:24.029 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:09:24.029 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:09:24.029 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:09:24.029 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:09:24.029 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:24.287 btrfs-progs v6.8.1 00:09:24.287 See https://btrfs.readthedocs.io for more information. 00:09:24.287 00:09:24.287 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:24.287 NOTE: several default settings have changed in version 5.15, please make sure 00:09:24.287 this does not affect your deployments: 00:09:24.287 - DUP for metadata (-m dup) 00:09:24.287 - enabled no-holes (-O no-holes) 00:09:24.287 - enabled free-space-tree (-R free-space-tree) 00:09:24.287 00:09:24.287 Label: (null) 00:09:24.287 UUID: 5f3a0220-d431-4a14-8ad8-1808d031ffc5 00:09:24.287 Node size: 16384 00:09:24.287 Sector size: 4096 (CPU page size: 4096) 00:09:24.287 Filesystem size: 510.00MiB 00:09:24.287 Block group profiles: 00:09:24.287 Data: single 8.00MiB 00:09:24.287 Metadata: DUP 32.00MiB 00:09:24.287 System: DUP 8.00MiB 00:09:24.287 SSD detected: yes 00:09:24.287 Zoned device: no 00:09:24.287 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:24.287 Checksum: crc32c 00:09:24.287 Number of devices: 1 00:09:24.287 Devices: 00:09:24.287 ID SIZE PATH 00:09:24.287 1 510.00MiB /dev/nvme0n1p1 00:09:24.287 00:09:24.287 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:09:24.287 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:24.287 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:24.287 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:24.287 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:24.287 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:24.287 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:24.287 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:24.287 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 560583 00:09:24.287 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:24.287 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:24.287 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:24.287 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:24.287 00:09:24.287 real 0m0.253s 00:09:24.287 user 0m0.032s 00:09:24.287 sys 0m0.129s 00:09:24.287 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.287 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:24.287 ************************************ 00:09:24.287 END TEST filesystem_btrfs 
00:09:24.287 ************************************ 00:09:24.545 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:24.545 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:24.545 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:24.545 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:24.545 ************************************ 00:09:24.545 START TEST filesystem_xfs 00:09:24.545 ************************************ 00:09:24.545 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:09:24.545 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:24.545 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:24.545 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:24.545 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:09:24.545 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:24.545 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:09:24.545 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:09:24.545 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:09:24.545 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:09:24.545 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:24.545 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:24.545 = sectsz=512 attr=2, projid32bit=1 00:09:24.545 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:24.545 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:24.545 data = bsize=4096 blocks=130560, imaxpct=25 00:09:24.545 = sunit=0 swidth=0 blks 00:09:24.545 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:24.545 log =internal log bsize=4096 blocks=16384, version=2 00:09:24.545 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:24.545 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:24.545 Discarding blocks...Done. 
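Each filesystem_* subtest funnels through the make_filesystem helper traced above, which differs per filesystem only in how it forces creation on a partition that may hold stale metadata: mkfs.ext4 takes uppercase -F, while btrfs and xfs take -f. A simplified reconstruction per the traced branches (the helper's retry counter $i is initialized in the trace but its loop is abbreviated here):

# Minimal sketch of autotest_common.sh's make_filesystem.
make_filesystem() {
    local fstype=$1 dev_name=$2 i=0 force
    if [[ $fstype == ext4 ]]; then
        force=-F          # ext4 forces overwrite with -F
    else
        force=-f          # btrfs and xfs use -f
    fi
    mkfs."$fstype" $force "$dev_name" && return 0
}

make_filesystem xfs /dev/nvme0n1p1

After mkfs succeeds, every variant runs the same smoke test visible in the trace that follows: mount the partition at /mnt/device, touch and rm a file with a sync on either side, unmount, and confirm via kill -0 that the nvmf target survived the I/O.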
00:09:24.545 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:09:24.545 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:24.545 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:24.545 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:24.545 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:24.545 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:24.545 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:24.545 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:24.802 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 560583 00:09:24.802 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:24.802 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:24.802 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:24.802 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:24.802 00:09:24.802 real 0m0.233s 00:09:24.803 user 0m0.031s 00:09:24.803 sys 0m0.080s 00:09:24.803 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:24.803 17:34:02 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:24.803 ************************************ 00:09:24.803 END TEST filesystem_xfs 00:09:24.803 ************************************ 00:09:24.803 17:34:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:24.803 17:34:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:24.803 17:34:03 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:28.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.077 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:28.077 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:09:28.077 17:34:06 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:28.077 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:28.077 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:28.077 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:28.077 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:09:28.077 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:28.077 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.077 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:28.077 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.077 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:28.077 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 560583 00:09:28.077 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 560583 ']' 00:09:28.077 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 560583 00:09:28.077 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:09:28.077 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:28.077 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 560583 00:09:28.077 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:28.077 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:28.078 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 560583' 00:09:28.078 killing process with pid 560583 00:09:28.078 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 560583 00:09:28.078 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 560583 00:09:28.644 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:28.644 00:09:28.644 real 0m10.179s 00:09:28.644 user 0m39.928s 00:09:28.644 sys 0m1.308s 00:09:28.644 17:34:06 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:28.644 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:28.644 ************************************ 00:09:28.644 END TEST nvmf_filesystem_no_in_capsule 00:09:28.644 ************************************ 00:09:28.644 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:28.644 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:28.644 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:28.644 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:28.644 ************************************ 00:09:28.644 START TEST nvmf_filesystem_in_capsule 00:09:28.644 ************************************ 00:09:28.644 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:09:28.644 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:28.644 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:28.644 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:28.644 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:28.644 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:28.644 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=562088 00:09:28.644 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 562088 00:09:28.644 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:28.644 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 562088 ']' 00:09:28.644 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.644 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:28.644 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
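nvmfappstart above forks build/bin/nvmf_tgt and then blocks in waitforlisten until the new process (pid 562088) answers on /var/tmp/spdk.sock. Only the invocation and the pid are traced here, so the polling loop in this sketch is an assumption:

# Invocation copied from the trace; the wait loop is assumed
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll until the app accepts RPCs on the default UNIX domain socket
until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done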
00:09:28.644 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:28.645 17:34:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:28.645 [2024-10-17 17:34:06.883876] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:09:28.645 [2024-10-17 17:34:06.883933] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:28.645 [2024-10-17 17:34:06.957092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:28.645 [2024-10-17 17:34:07.001376] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:28.645 [2024-10-17 17:34:07.001426] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:28.645 [2024-10-17 17:34:07.001436] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:28.645 [2024-10-17 17:34:07.001460] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:28.645 [2024-10-17 17:34:07.001467] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:28.645 [2024-10-17 17:34:07.002867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.645 [2024-10-17 17:34:07.002951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:28.645 [2024-10-17 17:34:07.003045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:28.645 [2024-10-17 17:34:07.003047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.902 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:28.902 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:09:28.902 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:28.902 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:28.902 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:28.902 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.902 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:28.902 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:09:28.902 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.902 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:28.902 [2024-10-17 17:34:07.179527] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6872c0/0x68b7b0) 
succeed. 00:09:28.902 [2024-10-17 17:34:07.190121] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x688950/0x6cce50) succeed. 00:09:29.159 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.159 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:29.159 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.159 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:29.159 Malloc1 00:09:29.159 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.159 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:29.159 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.159 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:29.159 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.159 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:29.159 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.159 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:29.159 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.159 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:29.159 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.159 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:29.159 [2024-10-17 17:34:07.491958] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:29.159 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.159 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:29.159 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:09:29.159 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:09:29.159 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 
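The rpc_cmd calls above build the in-capsule target end to end; -c 4096 sets the transport's in-capsule data size and is what separates this run from the no_in_capsule pass, letting small writes travel inside the command capsule instead of being pulled over with RDMA READ. Expressed as direct scripts/rpc.py calls, the traced sequence is:

scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096
scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420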
00:09:29.159 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:09:29.159 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:29.159 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.159 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:29.159 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.159 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:09:29.159 { 00:09:29.159 "name": "Malloc1", 00:09:29.159 "aliases": [ 00:09:29.159 "4b31ea33-50b5-4882-b6ef-63e34c292b41" 00:09:29.159 ], 00:09:29.159 "product_name": "Malloc disk", 00:09:29.159 "block_size": 512, 00:09:29.159 "num_blocks": 1048576, 00:09:29.159 "uuid": "4b31ea33-50b5-4882-b6ef-63e34c292b41", 00:09:29.159 "assigned_rate_limits": { 00:09:29.159 "rw_ios_per_sec": 0, 00:09:29.159 "rw_mbytes_per_sec": 0, 00:09:29.159 "r_mbytes_per_sec": 0, 00:09:29.159 "w_mbytes_per_sec": 0 00:09:29.159 }, 00:09:29.159 "claimed": true, 00:09:29.159 "claim_type": "exclusive_write", 00:09:29.159 "zoned": false, 00:09:29.159 "supported_io_types": { 00:09:29.159 "read": true, 00:09:29.159 "write": true, 00:09:29.159 "unmap": true, 00:09:29.159 "flush": true, 00:09:29.159 "reset": true, 00:09:29.159 "nvme_admin": false, 00:09:29.159 "nvme_io": false, 00:09:29.159 "nvme_io_md": false, 00:09:29.159 "write_zeroes": true, 00:09:29.159 "zcopy": true, 00:09:29.159 "get_zone_info": false, 00:09:29.159 "zone_management": false, 00:09:29.159 "zone_append": false, 00:09:29.159 "compare": false, 00:09:29.159 "compare_and_write": false, 00:09:29.159 "abort": true, 00:09:29.159 "seek_hole": false, 00:09:29.159 "seek_data": false, 00:09:29.159 "copy": true, 00:09:29.159 "nvme_iov_md": false 00:09:29.159 }, 00:09:29.160 "memory_domains": [ 00:09:29.160 { 00:09:29.160 "dma_device_id": "system", 00:09:29.160 "dma_device_type": 1 00:09:29.160 }, 00:09:29.160 { 00:09:29.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.160 "dma_device_type": 2 00:09:29.160 } 00:09:29.160 ], 00:09:29.160 "driver_specific": {} 00:09:29.160 } 00:09:29.160 ]' 00:09:29.160 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:09:29.417 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:09:29.417 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:09:29.417 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:09:29.417 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:09:29.417 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:09:29.417 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 
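get_bdev_size above pulls block_size and num_blocks out of the bdev_get_bdevs JSON with jq and reports the size in MiB; filesystem.sh then scales it back to bytes for the later nvme_size comparison. The arithmetic behind the traced values:

# From the JSON above: 1048576 blocks x 512 B/block
echo $(( 1048576 * 512 ))         # 536870912 bytes = 512 MiB
echo $(( 512 * 1024 * 1024 ))     # malloc_size=536870912, matching the trace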
00:09:29.417 17:34:07 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:31.314 17:34:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:31.314 17:34:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:09:31.314 17:34:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:31.314 17:34:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:31.314 17:34:09 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:09:33.211 17:34:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:33.211 17:34:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:33.211 17:34:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:33.211 17:34:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:33.211 17:34:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:33.211 17:34:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:09:33.211 17:34:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:33.211 17:34:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:33.211 17:34:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:33.211 17:34:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:33.211 17:34:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:33.211 17:34:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:33.211 17:34:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:33.211 17:34:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:33.211 17:34:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:33.211 17:34:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:33.211 17:34:11 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:33.211 17:34:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:33.211 17:34:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:34.143 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:34.143 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:34.143 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:34.143 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.143 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:34.143 ************************************ 00:09:34.143 START TEST filesystem_in_capsule_ext4 00:09:34.143 ************************************ 00:09:34.143 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:34.143 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:34.143 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:34.143 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:34.143 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:09:34.143 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:34.143 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:09:34.143 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:09:34.143 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:09:34.143 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:09:34.143 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:34.143 mke2fs 1.47.0 (5-Feb-2023) 00:09:34.143 Discarding device blocks: 0/522240 done 00:09:34.143 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:34.143 Filesystem UUID: 67179bcf-50b4-4351-902f-8aa16bf738a5 00:09:34.143 
Superblock backups stored on blocks: 00:09:34.143 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:34.143 00:09:34.401 Allocating group tables: 0/64 done 00:09:34.401 Writing inode tables: 0/64 done 00:09:34.401 Creating journal (8192 blocks): done 00:09:34.401 Writing superblocks and filesystem accounting information: 0/64 done 00:09:34.401 00:09:34.401 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:09:34.401 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:34.401 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:34.401 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:09:34.401 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:34.401 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:09:34.401 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:34.401 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:34.401 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 562088 00:09:34.401 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:34.401 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:34.401 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:34.401 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:34.401 00:09:34.401 real 0m0.208s 00:09:34.401 user 0m0.027s 00:09:34.401 sys 0m0.074s 00:09:34.401 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.401 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:34.401 ************************************ 00:09:34.401 END TEST filesystem_in_capsule_ext4 00:09:34.401 ************************************ 00:09:34.401 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:34.401 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:34.401 17:34:12 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.401 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:34.401 ************************************ 00:09:34.401 START TEST filesystem_in_capsule_btrfs 00:09:34.401 ************************************ 00:09:34.401 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:34.401 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:34.401 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:34.401 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:34.401 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:09:34.401 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:34.401 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:09:34.401 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:09:34.401 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:09:34.401 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:09:34.401 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:34.659 btrfs-progs v6.8.1 00:09:34.659 See https://btrfs.readthedocs.io for more information. 00:09:34.659 00:09:34.659 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:09:34.659 NOTE: several default settings have changed in version 5.15, please make sure 00:09:34.659 this does not affect your deployments: 00:09:34.659 - DUP for metadata (-m dup) 00:09:34.659 - enabled no-holes (-O no-holes) 00:09:34.659 - enabled free-space-tree (-R free-space-tree) 00:09:34.659 00:09:34.659 Label: (null) 00:09:34.659 UUID: e51b4863-2486-41ff-8378-e61a4f5eeb76 00:09:34.659 Node size: 16384 00:09:34.659 Sector size: 4096 (CPU page size: 4096) 00:09:34.659 Filesystem size: 510.00MiB 00:09:34.659 Block group profiles: 00:09:34.659 Data: single 8.00MiB 00:09:34.659 Metadata: DUP 32.00MiB 00:09:34.659 System: DUP 8.00MiB 00:09:34.659 SSD detected: yes 00:09:34.659 Zoned device: no 00:09:34.659 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:34.659 Checksum: crc32c 00:09:34.659 Number of devices: 1 00:09:34.659 Devices: 00:09:34.659 ID SIZE PATH 00:09:34.659 1 510.00MiB /dev/nvme0n1p1 00:09:34.659 00:09:34.659 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:09:34.659 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:34.659 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:34.659 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:09:34.659 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:34.659 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:09:34.659 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:34.659 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:34.659 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 562088 00:09:34.659 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:34.659 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:34.659 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:34.659 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:34.659 00:09:34.659 real 0m0.262s 00:09:34.659 user 0m0.032s 00:09:34.659 sys 0m0.127s 00:09:34.659 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.659 17:34:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@10 -- # set +x 00:09:34.659 ************************************ 00:09:34.659 END TEST filesystem_in_capsule_btrfs 00:09:34.659 ************************************ 00:09:34.659 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:34.659 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:34.659 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.659 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:34.917 ************************************ 00:09:34.917 START TEST filesystem_in_capsule_xfs 00:09:34.917 ************************************ 00:09:34.917 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:09:34.917 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:34.917 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:34.917 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:34.917 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:09:34.917 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:34.917 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:09:34.917 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:09:34.917 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:09:34.917 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:09:34.917 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:34.918 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:34.918 = sectsz=512 attr=2, projid32bit=1 00:09:34.918 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:34.918 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:34.918 data = bsize=4096 blocks=130560, imaxpct=25 00:09:34.918 = sunit=0 swidth=0 blks 00:09:34.918 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:34.918 log =internal log bsize=4096 blocks=16384, version=2 00:09:34.918 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:34.918 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:34.918 Discarding blocks...Done. 
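After each mkfs, filesystem.sh runs the same smoke test (traced in full below, and earlier for the xfs, ext4, and btrfs runs above): mount the partition, create and delete a file with syncs in between, unmount, then confirm the target process and both block devices survived. Collected from the trace into one place:

mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 562088                            # nvmf_tgt must still be alive
lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible
lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible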
00:09:34.918 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:09:34.918 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:34.918 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:34.918 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:09:34.918 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:34.918 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:09:34.918 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:09:34.918 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:34.918 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 562088 00:09:34.918 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:34.918 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:34.918 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:34.918 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:34.918 00:09:34.918 real 0m0.188s 00:09:34.918 user 0m0.024s 00:09:34.918 sys 0m0.075s 00:09:34.918 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.918 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:34.918 ************************************ 00:09:34.918 END TEST filesystem_in_capsule_xfs 00:09:34.918 ************************************ 00:09:34.918 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:35.176 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:35.176 17:34:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:38.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.463 17:34:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:38.463 17:34:16 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:09:38.463 17:34:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:38.463 17:34:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:38.463 17:34:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:38.463 17:34:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:38.463 17:34:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:09:38.463 17:34:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:38.464 17:34:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.464 17:34:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:38.464 17:34:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.464 17:34:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:38.464 17:34:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 562088 00:09:38.464 17:34:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 562088 ']' 00:09:38.464 17:34:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 562088 00:09:38.464 17:34:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:09:38.464 17:34:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:38.464 17:34:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 562088 00:09:38.464 17:34:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:38.464 17:34:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:38.464 17:34:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 562088' 00:09:38.464 killing process with pid 562088 00:09:38.464 17:34:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 562088 00:09:38.464 17:34:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 562088 00:09:39.040 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:39.040 00:09:39.040 real 0m10.298s 00:09:39.041 
user 0m40.301s 00:09:39.041 sys 0m1.409s 00:09:39.041 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.041 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:39.041 ************************************ 00:09:39.041 END TEST nvmf_filesystem_in_capsule 00:09:39.041 ************************************ 00:09:39.041 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:09:39.041 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:39.041 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:09:39.041 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:39.041 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:39.041 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:09:39.041 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:39.041 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:39.041 rmmod nvme_rdma 00:09:39.041 rmmod nvme_fabrics 00:09:39.041 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:39.041 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:09:39.041 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:09:39.041 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:09:39.041 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:39.041 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:09:39.041 00:09:39.041 real 0m27.590s 00:09:39.041 user 1m22.370s 00:09:39.041 sys 0m7.935s 00:09:39.041 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.041 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:39.041 ************************************ 00:09:39.041 END TEST nvmf_filesystem 00:09:39.041 ************************************ 00:09:39.041 17:34:17 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:09:39.041 17:34:17 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:39.041 17:34:17 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:39.041 17:34:17 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:39.041 ************************************ 00:09:39.041 START TEST nvmf_target_discovery 00:09:39.041 ************************************ 00:09:39.041 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:09:39.041 * Looking for test storage... 
00:09:39.041 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:39.041 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:39.041 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:09:39.041 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:39.301 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:39.301 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.301 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.301 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.301 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.301 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.301 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.301 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.301 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.301 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:39.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.302 --rc genhtml_branch_coverage=1 00:09:39.302 --rc genhtml_function_coverage=1 00:09:39.302 --rc genhtml_legend=1 00:09:39.302 --rc geninfo_all_blocks=1 00:09:39.302 --rc geninfo_unexecuted_blocks=1 00:09:39.302 00:09:39.302 ' 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:39.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.302 --rc genhtml_branch_coverage=1 00:09:39.302 --rc genhtml_function_coverage=1 00:09:39.302 --rc genhtml_legend=1 00:09:39.302 --rc geninfo_all_blocks=1 00:09:39.302 --rc geninfo_unexecuted_blocks=1 00:09:39.302 00:09:39.302 ' 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:39.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.302 --rc genhtml_branch_coverage=1 00:09:39.302 --rc genhtml_function_coverage=1 00:09:39.302 --rc genhtml_legend=1 00:09:39.302 --rc geninfo_all_blocks=1 00:09:39.302 --rc geninfo_unexecuted_blocks=1 00:09:39.302 00:09:39.302 ' 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:39.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.302 --rc genhtml_branch_coverage=1 00:09:39.302 --rc genhtml_function_coverage=1 00:09:39.302 --rc genhtml_legend=1 00:09:39.302 --rc geninfo_all_blocks=1 00:09:39.302 --rc geninfo_unexecuted_blocks=1 00:09:39.302 00:09:39.302 ' 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:39.302 17:34:17 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:39.302 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:09:39.302 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:09:39.303 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:09:39.303 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:39.303 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:39.303 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:39.303 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:39.303 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.303 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:39.303 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.303 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:39.303 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:39.303 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:09:39.303 17:34:17 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:45.895 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:45.895 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:09:45.895 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:45.895 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:45.895 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:45.895 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:45.895 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:45.895 17:34:24 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:09:45.895 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:45.895 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:09:45.895 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:09:45.895 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:09:45.895 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:09:45.895 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:09:45.895 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:09:45.895 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:45.895 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:45.895 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
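(The earlier "[: : integer expression expected" message from nvmf/common.sh line 33 is the shell refusing to compare an empty string with -eq, as the '[' '' -eq 1 ']' trace immediately before it shows; the run continues past it.) The trace above builds per-vendor lists of NVMe-oF-capable NIC device IDs, Intel e810/x722 parts and Mellanox ConnectX parts, and because SPDK_TEST_NVMF_NICS=mlx5 it narrows pci_devs to the Mellanox entries before scanning the bus. Below is a minimal sketch of an equivalent scan straight from sysfs, assuming the 0x15b3 vendor ID and the ConnectX device IDs listed above; the script itself resolves devices through its own pci_bus_cache helper rather than reading sysfs like this.

# Sketch: report Mellanox (0x15b3) PCI functions whose device ID is one
# of the ConnectX IDs registered in the mlx array above.
mellanox=0x15b3
for dev in /sys/bus/pci/devices/*; do
    vendor=$(cat "$dev/vendor")
    device=$(cat "$dev/device")
    [[ $vendor == "$mellanox" ]] || continue
    case $device in
        0x1013|0x1015|0x1017|0x1019|0x101b|0x101d|0x1021|0xa2d6|0xa2dc)
            echo "Found ${dev##*/} ($vendor - $device)" ;;
    esac
done

On this host the two hits are the mlx5 ports at 0000:18:00.0 and 0000:18:00.1 reported next.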
00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:09:45.896 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:09:45.896 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:45.896 Found net devices under 0000:18:00.0: mlx_0_0 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.896 17:34:24 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:45.896 Found net devices under 0000:18:00.1: mlx_0_1 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # rdma_device_init 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # uname 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@528 -- # allocate_nic_ips 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 
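At this point the RDMA kernel stack is loaded (ib_cm, ib_core, ib_umad, ib_uverbs, iw_cm, rdma_cm, rdma_ucm) and allocate_nic_ips walks the RDMA-capable interfaces, with the address pool starting at 192.168.100.8 per NVMF_IP_LEAST_ADDR=8. The get_ip_address helper traced below reduces to a single ip/awk/cut pipeline; a self-contained restatement using this run's interface names:

# Sketch: first IPv4 address of an interface, matching the
# "ip -o -4 addr show | awk | cut" pipeline in the trace below.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # 192.168.100.8 on this host
get_ip_address mlx_0_1   # 192.168.100.9 on this host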
00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:45.896 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:46.156 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:46.156 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:09:46.156 altname enp24s0f0np0 00:09:46.156 altname ens785f0np0 00:09:46.156 inet 192.168.100.8/24 scope global mlx_0_0 00:09:46.156 valid_lft forever preferred_lft forever 00:09:46.156 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:46.156 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:46.156 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:46.156 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:46.156 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:46.156 17:34:24 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:46.156 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:46.156 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:46.156 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:46.156 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:46.156 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:09:46.156 altname enp24s0f1np1 00:09:46.156 altname ens785f1np1 00:09:46.156 inet 192.168.100.9/24 scope global mlx_0_1 00:09:46.156 valid_lft forever preferred_lft forever 00:09:46.156 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:09:46.156 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:46.156 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:46.156 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:09:46.156 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:09:46.156 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:46.156 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:46.156 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:46.156 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:46.156 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:46.156 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:46.156 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:46.156 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:46.156 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:46.156 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:46.156 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:09:46.156 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:46.156 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:46.156 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@108 -- # echo mlx_0_1 
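Both ports resolve to pre-assigned addresses, so nothing new is allocated and nvmftestinit can derive its target IPs. The trace below collapses the per-interface results into RDMA_IP_LIST and peels off the first and second entries with head/tail; restated compactly:

# Sketch: how the trace below derives the two target IPs.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
modprobe nvme-rdma    # host-side initiator driver used by the later nvme discover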
00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@109 -- # continue 2 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:09:46.157 192.168.100.9' 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:09:46.157 192.168.100.9' 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # head -n 1 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:09:46.157 192.168.100.9' 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # tail -n +2 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # head -n 1 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:46.157 17:34:24 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=566697 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 566697 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 566697 ']' 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:46.157 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:46.157 [2024-10-17 17:34:24.483508] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:09:46.157 [2024-10-17 17:34:24.483571] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.416 [2024-10-17 17:34:24.556630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:46.416 [2024-10-17 17:34:24.605081] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:46.416 [2024-10-17 17:34:24.605127] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:46.416 [2024-10-17 17:34:24.605137] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:46.416 [2024-10-17 17:34:24.605145] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:46.416 [2024-10-17 17:34:24.605152] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
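nvmfappstart launches build/bin/nvmf_tgt with shared-memory id 0, tracepoint mask 0xFFFF and core mask 0xF (hence the four reactors reported below), stores the pid as nvmfpid (566697 here), and blocks in waitforlisten until the target answers on /var/tmp/spdk.sock. A rough sketch of that start-and-wait pattern, using rpc.py's rpc_get_methods call as the readiness probe; this probe choice is an assumption for illustration, and the real waitforlisten in autotest_common.sh is considerably more defensive:

# Sketch: start the target and poll its RPC socket until it responds.
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for _ in {1..100}; do
    if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.1
done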
00:09:46.416 [2024-10-17 17:34:24.606565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.416 [2024-10-17 17:34:24.606653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:46.416 [2024-10-17 17:34:24.606734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:46.416 [2024-10-17 17:34:24.606737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.416 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:46.416 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:09:46.416 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:46.416 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:46.416 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:46.416 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:46.416 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:46.416 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.416 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:46.417 [2024-10-17 17:34:24.793562] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x19142c0/0x19187b0) succeed. 00:09:46.417 [2024-10-17 17:34:24.804210] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1915950/0x1959e50) succeed. 
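The RDMA transport is now created (nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192) and one IB device is registered per mlx5 port. discovery.sh then provisions four identical subsystems: for each i in 1..4 it creates a null bdev (NULL_BDEV_SIZE=102400, NULL_BLOCK_SIZE=512), a subsystem nqn.2016-06.io.spdk:cnode$i, attaches the bdev as a namespace, and adds an RDMA listener on 192.168.100.8:4420; it finishes by exposing the discovery subsystem on the same port and adding a referral to port 4430. The rpc_cmd calls traced below wrap scripts/rpc.py, so the standalone equivalent of the whole setup is roughly:

# Sketch: the subsystem setup performed by discovery.sh, expressed as
# direct rpc.py calls (rpc_cmd in the trace is a wrapper around these).
for i in $(seq 1 4); do
    scripts/rpc.py bdev_null_create Null$i 102400 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
        -a -s SPDK0000000000000$i
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t rdma -a 192.168.100.8 -s 4420
done
scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
scripts/rpc.py nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430

The nvme discover output further down confirms the result: six discovery log records covering the current discovery subsystem, cnode1 through cnode4, and the port-4430 referral.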
00:09:46.676 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.676 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:09:46.676 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:46.676 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:09:46.676 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.676 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:46.676 Null1 00:09:46.676 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.676 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:46.676 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.676 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:46.676 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.676 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:46.676 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.676 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:46.676 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.676 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:46.676 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.676 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:46.676 [2024-10-17 17:34:24.985934] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:46.676 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.676 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:46.676 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:46.676 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.676 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:46.676 Null2 00:09:46.676 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.676 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:46.676 17:34:24 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.676 17:34:24 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:46.676 Null3 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.676 17:34:25 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:46.676 Null4 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.676 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:46.936 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.936 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:46.936 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.936 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:46.936 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.936 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:09:46.936 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.936 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:46.936 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.936 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:46.936 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.936 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:46.936 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.936 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:09:46.936 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.936 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:46.936 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.936 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -a 192.168.100.8 -s 4420 00:09:46.936 00:09:46.936 Discovery Log Number of Records 6, Generation counter 6 00:09:46.936 =====Discovery Log Entry 0====== 00:09:46.936 trtype: rdma 00:09:46.936 adrfam: ipv4 00:09:46.936 subtype: current discovery subsystem 00:09:46.936 treq: not required 00:09:46.936 portid: 0 00:09:46.936 trsvcid: 4420 00:09:46.936 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:46.936 traddr: 192.168.100.8 00:09:46.936 eflags: explicit discovery connections, duplicate discovery information 00:09:46.936 rdma_prtype: not specified 00:09:46.936 rdma_qptype: connected 00:09:46.936 rdma_cms: rdma-cm 00:09:46.936 rdma_pkey: 0x0000 00:09:46.936 =====Discovery Log Entry 1====== 00:09:46.936 trtype: rdma 00:09:46.936 adrfam: ipv4 00:09:46.936 subtype: nvme subsystem 00:09:46.936 treq: not required 00:09:46.936 portid: 0 00:09:46.936 trsvcid: 4420 00:09:46.936 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:46.936 traddr: 192.168.100.8 00:09:46.936 eflags: none 00:09:46.936 rdma_prtype: not specified 00:09:46.936 rdma_qptype: connected 00:09:46.936 rdma_cms: rdma-cm 00:09:46.936 rdma_pkey: 0x0000 00:09:46.936 =====Discovery Log Entry 2====== 00:09:46.936 trtype: rdma 00:09:46.936 adrfam: ipv4 00:09:46.936 subtype: nvme subsystem 00:09:46.936 treq: not required 00:09:46.936 portid: 0 00:09:46.936 trsvcid: 4420 00:09:46.936 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:46.936 traddr: 192.168.100.8 00:09:46.936 eflags: none 00:09:46.936 rdma_prtype: not specified 00:09:46.936 rdma_qptype: connected 00:09:46.936 rdma_cms: rdma-cm 00:09:46.936 rdma_pkey: 0x0000 00:09:46.936 =====Discovery Log Entry 3====== 00:09:46.936 trtype: rdma 00:09:46.936 adrfam: ipv4 00:09:46.936 subtype: nvme subsystem 00:09:46.936 treq: not required 00:09:46.936 portid: 0 00:09:46.936 trsvcid: 4420 00:09:46.936 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:46.936 traddr: 192.168.100.8 00:09:46.936 eflags: none 00:09:46.936 rdma_prtype: not specified 00:09:46.936 rdma_qptype: connected 00:09:46.936 rdma_cms: rdma-cm 00:09:46.936 rdma_pkey: 0x0000 00:09:46.936 =====Discovery Log Entry 4====== 00:09:46.936 trtype: rdma 00:09:46.936 adrfam: ipv4 00:09:46.936 subtype: nvme subsystem 00:09:46.936 treq: not required 00:09:46.936 portid: 0 00:09:46.936 trsvcid: 4420 00:09:46.936 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:46.936 traddr: 192.168.100.8 00:09:46.936 eflags: none 00:09:46.936 rdma_prtype: not specified 00:09:46.936 rdma_qptype: connected 00:09:46.936 rdma_cms: rdma-cm 00:09:46.936 rdma_pkey: 0x0000 00:09:46.936 =====Discovery Log Entry 5====== 00:09:46.936 trtype: rdma 00:09:46.936 adrfam: ipv4 00:09:46.936 subtype: discovery subsystem referral 00:09:46.936 treq: not required 00:09:46.936 portid: 0 00:09:46.936 trsvcid: 4430 00:09:46.936 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:46.936 traddr: 192.168.100.8 00:09:46.936 eflags: none 00:09:46.936 rdma_prtype: unrecognized 00:09:46.936 rdma_qptype: unrecognized 00:09:46.936 rdma_cms: unrecognized 00:09:46.936 rdma_pkey: 0x0000 00:09:46.936 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:46.936 Perform nvmf subsystem discovery via RPC 00:09:46.936 17:34:25 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:46.936 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.936 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:46.936 [ 00:09:46.936 { 00:09:46.936 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:46.936 "subtype": "Discovery", 00:09:46.936 "listen_addresses": [ 00:09:46.936 { 00:09:46.936 "trtype": "RDMA", 00:09:46.936 "adrfam": "IPv4", 00:09:46.936 "traddr": "192.168.100.8", 00:09:46.936 "trsvcid": "4420" 00:09:46.936 } 00:09:46.936 ], 00:09:46.936 "allow_any_host": true, 00:09:46.936 "hosts": [] 00:09:46.936 }, 00:09:46.936 { 00:09:46.936 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:46.936 "subtype": "NVMe", 00:09:46.936 "listen_addresses": [ 00:09:46.936 { 00:09:46.936 "trtype": "RDMA", 00:09:46.936 "adrfam": "IPv4", 00:09:46.936 "traddr": "192.168.100.8", 00:09:46.936 "trsvcid": "4420" 00:09:46.936 } 00:09:46.936 ], 00:09:46.936 "allow_any_host": true, 00:09:46.936 "hosts": [], 00:09:46.936 "serial_number": "SPDK00000000000001", 00:09:46.936 "model_number": "SPDK bdev Controller", 00:09:46.936 "max_namespaces": 32, 00:09:46.936 "min_cntlid": 1, 00:09:46.936 "max_cntlid": 65519, 00:09:46.936 "namespaces": [ 00:09:46.936 { 00:09:46.936 "nsid": 1, 00:09:46.936 "bdev_name": "Null1", 00:09:46.936 "name": "Null1", 00:09:46.936 "nguid": "065678D1D2E84F319B50621E2D22A608", 00:09:46.936 "uuid": "065678d1-d2e8-4f31-9b50-621e2d22a608" 00:09:46.936 } 00:09:46.936 ] 00:09:46.936 }, 00:09:46.936 { 00:09:46.936 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:46.936 "subtype": "NVMe", 00:09:46.936 "listen_addresses": [ 00:09:46.936 { 00:09:46.936 "trtype": "RDMA", 00:09:46.936 "adrfam": "IPv4", 00:09:46.936 "traddr": "192.168.100.8", 00:09:46.936 "trsvcid": "4420" 00:09:46.936 } 00:09:46.936 ], 00:09:46.936 "allow_any_host": true, 00:09:46.936 "hosts": [], 00:09:46.936 "serial_number": "SPDK00000000000002", 00:09:46.936 "model_number": "SPDK bdev Controller", 00:09:46.936 "max_namespaces": 32, 00:09:46.936 "min_cntlid": 1, 00:09:46.936 "max_cntlid": 65519, 00:09:46.936 "namespaces": [ 00:09:46.936 { 00:09:46.936 "nsid": 1, 00:09:46.936 "bdev_name": "Null2", 00:09:46.936 "name": "Null2", 00:09:46.936 "nguid": "5EA7EBD75DF842CD8CEF1916AF93B1FF", 00:09:46.936 "uuid": "5ea7ebd7-5df8-42cd-8cef-1916af93b1ff" 00:09:46.936 } 00:09:46.936 ] 00:09:46.936 }, 00:09:46.936 { 00:09:46.936 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:46.936 "subtype": "NVMe", 00:09:46.936 "listen_addresses": [ 00:09:46.936 { 00:09:46.936 "trtype": "RDMA", 00:09:46.936 "adrfam": "IPv4", 00:09:46.936 "traddr": "192.168.100.8", 00:09:46.936 "trsvcid": "4420" 00:09:46.936 } 00:09:46.936 ], 00:09:46.936 "allow_any_host": true, 00:09:46.936 "hosts": [], 00:09:46.936 "serial_number": "SPDK00000000000003", 00:09:46.936 "model_number": "SPDK bdev Controller", 00:09:46.936 "max_namespaces": 32, 00:09:46.936 "min_cntlid": 1, 00:09:46.936 "max_cntlid": 65519, 00:09:46.937 "namespaces": [ 00:09:46.937 { 00:09:46.937 "nsid": 1, 00:09:46.937 "bdev_name": "Null3", 00:09:46.937 "name": "Null3", 00:09:46.937 "nguid": "F78E4496F00F42EAA393E177D6A24EDF", 00:09:46.937 "uuid": "f78e4496-f00f-42ea-a393-e177d6a24edf" 00:09:46.937 } 00:09:46.937 ] 00:09:46.937 }, 00:09:46.937 { 00:09:46.937 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:46.937 "subtype": "NVMe", 00:09:46.937 "listen_addresses": [ 00:09:46.937 { 00:09:46.937 
"trtype": "RDMA", 00:09:46.937 "adrfam": "IPv4", 00:09:46.937 "traddr": "192.168.100.8", 00:09:46.937 "trsvcid": "4420" 00:09:46.937 } 00:09:46.937 ], 00:09:46.937 "allow_any_host": true, 00:09:46.937 "hosts": [], 00:09:46.937 "serial_number": "SPDK00000000000004", 00:09:46.937 "model_number": "SPDK bdev Controller", 00:09:46.937 "max_namespaces": 32, 00:09:46.937 "min_cntlid": 1, 00:09:46.937 "max_cntlid": 65519, 00:09:46.937 "namespaces": [ 00:09:46.937 { 00:09:46.937 "nsid": 1, 00:09:46.937 "bdev_name": "Null4", 00:09:46.937 "name": "Null4", 00:09:46.937 "nguid": "C5A9D822C719435583E8B8C056AB93EC", 00:09:46.937 "uuid": "c5a9d822-c719-4355-83e8-b8c056ab93ec" 00:09:46.937 } 00:09:46.937 ] 00:09:46.937 } 00:09:46.937 ] 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:46.937 
17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.937 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.196 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.196 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:09:47.196 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:47.196 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:47.196 17:34:25 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:09:47.196 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:47.196 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:09:47.196 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:47.196 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:47.196 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:09:47.196 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:47.196 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:47.196 rmmod nvme_rdma 00:09:47.196 rmmod nvme_fabrics 00:09:47.196 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:47.196 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:09:47.196 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:09:47.196 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 566697 ']' 00:09:47.196 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 566697 00:09:47.196 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 566697 ']' 00:09:47.196 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 566697 00:09:47.196 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:09:47.196 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:47.196 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 566697 00:09:47.196 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:47.196 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:47.196 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 566697' 00:09:47.196 killing process with pid 566697 00:09:47.196 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 566697 00:09:47.196 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 566697 00:09:47.455 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:47.455 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:09:47.455 00:09:47.455 real 0m8.415s 00:09:47.455 user 0m6.352s 00:09:47.455 sys 0m5.783s 00:09:47.455 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:47.455 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:47.455 ************************************ 00:09:47.455 END TEST nvmf_target_discovery 
00:09:47.455 ************************************ 00:09:47.455 17:34:25 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:09:47.455 17:34:25 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:47.455 17:34:25 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:47.455 17:34:25 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:47.455 ************************************ 00:09:47.455 START TEST nvmf_referrals 00:09:47.455 ************************************ 00:09:47.455 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:09:47.714 * Looking for test storage... 00:09:47.714 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:47.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.714 --rc genhtml_branch_coverage=1 00:09:47.714 --rc genhtml_function_coverage=1 00:09:47.714 --rc genhtml_legend=1 00:09:47.714 --rc geninfo_all_blocks=1 00:09:47.714 --rc geninfo_unexecuted_blocks=1 00:09:47.714 00:09:47.714 ' 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:47.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.714 --rc genhtml_branch_coverage=1 00:09:47.714 --rc genhtml_function_coverage=1 00:09:47.714 --rc genhtml_legend=1 00:09:47.714 --rc geninfo_all_blocks=1 00:09:47.714 --rc geninfo_unexecuted_blocks=1 00:09:47.714 00:09:47.714 ' 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:47.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.714 --rc genhtml_branch_coverage=1 00:09:47.714 --rc genhtml_function_coverage=1 00:09:47.714 --rc genhtml_legend=1 00:09:47.714 --rc geninfo_all_blocks=1 00:09:47.714 --rc geninfo_unexecuted_blocks=1 00:09:47.714 00:09:47.714 ' 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:47.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.714 --rc genhtml_branch_coverage=1 00:09:47.714 --rc genhtml_function_coverage=1 00:09:47.714 --rc genhtml_legend=1 00:09:47.714 --rc geninfo_all_blocks=1 00:09:47.714 --rc geninfo_unexecuted_blocks=1 00:09:47.714 00:09:47.714 ' 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@7 -- # uname -s 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.714 17:34:25 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:47.714 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.714 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:47.714 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:47.715 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:09:47.715 17:34:26 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:54.283 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:54.283 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:09:54.283 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:54.283 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:54.283 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:54.283 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:54.283 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:54.283 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:09:54.283 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:54.283 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:09:54.283 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:09:54.283 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:09:54.283 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:09:54.283 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@322 -- # mlx=() 00:09:54.283 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:09:54.283 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:54.283 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:54.283 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:54.283 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:54.283 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:09:54.284 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # 
NVME_CONNECT='nvme connect -i 15' 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:09:54.284 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:09:54.284 Found net devices under 0000:18:00.0: mlx_0_0 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:09:54.284 Found net devices under 0000:18:00.1: mlx_0_1 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # 
[[ rdma == tcp ]] 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # rdma_device_init 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # uname 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe ib_cm 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe ib_core 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe ib_umad 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@70 -- # modprobe iw_cm 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@528 -- # allocate_nic_ips 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # get_rdma_if_list 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:54.284 17:34:32 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:09:54.284 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:54.284 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:09:54.284 altname enp24s0f0np0 00:09:54.284 altname ens785f0np0 00:09:54.284 inet 192.168.100.8/24 scope global mlx_0_0 00:09:54.284 valid_lft forever preferred_lft forever 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:54.284 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:09:54.285 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:54.285 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:09:54.285 altname enp24s0f1np1 00:09:54.285 altname ens785f1np1 00:09:54.285 inet 192.168.100.9/24 scope global mlx_0_1 00:09:54.285 valid_lft forever preferred_lft forever 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # get_rdma_if_list 00:09:54.285 17:34:32 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_0 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@108 -- # echo mlx_0_1 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@109 -- # continue 2 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # awk '{print $4}' 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # cut -d/ -f1 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:09:54.285 192.168.100.9' 
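allocate_nic_ips and get_available_rdma_ips build RDMA_IP_LIST by reading the IPv4 address off each mlx interface. The pipeline, taken directly from the get_ip_address trace above (interface name is an example from this run):

    # Prints the bare IPv4 address of the interface, e.g. 192.168.100.8 for mlx_0_0.
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1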
00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:09:54.285 192.168.100.9' 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # head -n 1 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:09:54.285 192.168.100.9' 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # head -n 1 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # tail -n +2 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=569846 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 569846 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 569846 ']' 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:54.285 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:54.543 [2024-10-17 17:34:32.709722] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
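nvmfappstart has just launched the target and waitforlisten blocks until the app's RPC socket answers. A rough equivalent, with the launch flags taken from the trace; the readiness poll below is a simplified stand-in for the harness's waitforlisten helper:

    # Start the target in the background and wait for its RPC socket to come up.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5    # keep polling until the app answers on /var/tmp/spdk.sock
    done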
00:09:54.543 [2024-10-17 17:34:32.709780] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.543 [2024-10-17 17:34:32.783180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:54.543 [2024-10-17 17:34:32.828229] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:54.543 [2024-10-17 17:34:32.828275] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:54.543 [2024-10-17 17:34:32.828284] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:54.543 [2024-10-17 17:34:32.828293] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:54.543 [2024-10-17 17:34:32.828300] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:54.543 [2024-10-17 17:34:32.829741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.543 [2024-10-17 17:34:32.829758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:54.543 [2024-10-17 17:34:32.829835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:54.543 [2024-10-17 17:34:32.829837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.802 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:54.802 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:09:54.802 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:54.802 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:54.802 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:54.802 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.802 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:54.802 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.802 17:34:32 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:54.802 [2024-10-17 17:34:33.018061] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e8b2c0/0x1e8f7b0) succeed. 00:09:54.802 [2024-10-17 17:34:33.028598] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e8c950/0x1ed0e50) succeed. 
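With both IB devices created, referrals.sh builds its fixture: an RDMA transport, a discovery listener on 192.168.100.8:8009, and three referrals on port 4430. Condensed from the rpc_cmd calls traced here (addresses and ports from this run):

    # Fixture setup for the referrals test.
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
    done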
00:09:54.802 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.802 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:09:54.802 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.802 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:54.802 [2024-10-17 17:34:33.175678] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:09:54.802 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.802 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:09:54.802 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.802 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:54.802 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # 
rpc_cmd nvmf_discovery_get_referrals 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:55.061 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # sort 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:55.319 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:55.576 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:55.576 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:55.576 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:55.576 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:55.576 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:55.576 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:55.576 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:55.576 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:55.576 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:55.576 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:55.576 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:55.576 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:55.576 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:55.576 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ 
nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:55.576 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:55.576 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.576 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:55.576 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.576 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:55.576 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:55.576 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:55.576 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.576 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:55.576 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:55.576 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:55.576 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.834 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:55.834 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:55.834 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:55.834 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:55.834 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:55.834 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:55.834 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:55.834 17:34:33 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:55.834 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:55.834 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:55.834 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:55.834 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:55.834 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:55.834 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 
--hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:55.834 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:55.834 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:55.834 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:55.834 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:55.834 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:55.834 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:55.834 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:56.092 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:56.092 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:56.092 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.092 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:56.092 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.092 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:56.092 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.092 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:56.092 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:56.092 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.092 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:56.092 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:56.092 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:56.092 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:56.092 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:56.092 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:56.092 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
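The referral checks traced above all follow one pattern: compare the target's view of its referrals (over RPC) against the host's view (the discovery log). A minimal standalone sketch of that round trip, assuming a running SPDK target with its discovery service on 192.168.100.8:8009; the rpc.py path, addresses, and NQN are taken from this log and are otherwise illustrative:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # Advertise a second discovery service as a referral.
    $rpc nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 \
        -n nqn.2014-08.org.nvmexpress.discovery

    # Target-side view: referral addresses the target believes it advertises.
    $rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

    # Host-side view: discovery-log records, filtering out the entry that
    # describes the discovery controller we are currently connected to.
    nvme discover -t rdma -a 192.168.100.8 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
        sort

    # Symmetric cleanup, as at referrals.sh@71 and @79 above.
    $rpc nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 \
        -n nqn.2014-08.org.nvmexpress.discovery

The test passes when the two sorted lists agree after every add and remove.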
00:09:56.092 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:56.092 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:56.092 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:56.092 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:56.092 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:56.092 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:09:56.092 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:09:56.092 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:09:56.092 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:09:56.092 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:56.092 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:09:56.092 rmmod nvme_rdma 00:09:56.092 rmmod nvme_fabrics 00:09:56.349 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:56.349 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:09:56.349 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:09:56.349 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 569846 ']' 00:09:56.349 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 569846 00:09:56.349 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 569846 ']' 00:09:56.349 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 569846 00:09:56.349 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:09:56.349 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:56.349 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 569846 00:09:56.349 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:56.349 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:56.349 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 569846' 00:09:56.349 killing process with pid 569846 00:09:56.349 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 569846 00:09:56.349 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 569846 00:09:56.607 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:56.607 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:09:56.607 00:09:56.607 real 0m9.019s 00:09:56.607 user 0m10.436s 00:09:56.607 sys 0m5.958s 00:09:56.607 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:56.607 17:34:34 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:56.607 ************************************ 00:09:56.607 END TEST nvmf_referrals 00:09:56.607 ************************************ 00:09:56.607 17:34:34 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:09:56.607 17:34:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:56.607 17:34:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:56.607 17:34:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:56.607 ************************************ 00:09:56.607 START TEST nvmf_connect_disconnect 00:09:56.607 ************************************ 00:09:56.607 17:34:34 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:09:56.867 * Looking for test storage... 00:09:56.867 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:56.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.867 --rc genhtml_branch_coverage=1 00:09:56.867 --rc genhtml_function_coverage=1 00:09:56.867 --rc genhtml_legend=1 00:09:56.867 --rc geninfo_all_blocks=1 00:09:56.867 --rc geninfo_unexecuted_blocks=1 00:09:56.867 00:09:56.867 ' 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:56.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.867 --rc genhtml_branch_coverage=1 00:09:56.867 --rc genhtml_function_coverage=1 00:09:56.867 --rc genhtml_legend=1 00:09:56.867 --rc geninfo_all_blocks=1 00:09:56.867 --rc geninfo_unexecuted_blocks=1 00:09:56.867 00:09:56.867 ' 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:56.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.867 --rc genhtml_branch_coverage=1 00:09:56.867 --rc genhtml_function_coverage=1 00:09:56.867 --rc genhtml_legend=1 00:09:56.867 --rc geninfo_all_blocks=1 00:09:56.867 --rc geninfo_unexecuted_blocks=1 00:09:56.867 00:09:56.867 ' 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:56.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.867 --rc genhtml_branch_coverage=1 00:09:56.867 --rc genhtml_function_coverage=1 00:09:56.867 --rc genhtml_legend=1 00:09:56.867 --rc geninfo_all_blocks=1 00:09:56.867 --rc geninfo_unexecuted_blocks=1 00:09:56.867 00:09:56.867 ' 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:56.867 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:56.868 17:34:35 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:56.868 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:09:56.868 17:34:35 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 
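The e810, x722, and mlx arrays assembled above are allow-lists of NIC PCI IDs; the script then walks every cached PCI function whose vendor:device pair appears on a list. The real common.sh consults a prebuilt pci_bus_cache map; lspci stands in for it in this condensed sketch, with the Mellanox IDs copied from the trace:

    mellanox=15b3
    mlx_ids="1013 1015 1017 1019 101b 101d 1021 a2d6 a2dc"
    # lspci -Dn prints: <domain:bus:dev.fn> <class>: <vendor:device> ...
    while read -r addr _ id _; do
        for want in $mlx_ids; do
            [[ $id == "$mellanox:$want" ]] && echo "Found $addr ($id)"
        done
    done < <(lspci -Dn)

On this testbed the sketch would report the two ConnectX functions shown below (0000:18:00.0 and 0000:18:00.1, device 0x1013).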
00:10:03.429 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:10:03.429 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:03.429 Found net devices under 0000:18:00.0: mlx_0_0 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 
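Each matched PCI function is then tied to its kernel netdev purely through sysfs; that glob is what produces the "Found net devices under ..." lines for both ports. The equivalent fragment, with the address taken from the scan:

    pci=0000:18:00.0
    # One entry per netdev registered by this function, e.g. mlx_0_0.
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        echo "Found net devices under $pci: ${dev##*/}"
    done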
00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:03.429 Found net devices under 0000:18:00.1: mlx_0_1 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # rdma_device_init 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:10:03.429 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # uname 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@528 -- # allocate_nic_ips 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:03.430 17:34:41 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:03.430 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:03.430 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:10:03.430 altname enp24s0f0np0 00:10:03.430 altname ens785f0np0 00:10:03.430 inet 192.168.100.8/24 scope global mlx_0_0 00:10:03.430 valid_lft forever preferred_lft forever 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 
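get_ip_address, invoked for each RDMA-capable interface here, is ip(8) plus two text cuts: take the single-line -o record, keep the fourth column (the CIDR address), and strip the prefix length. Reconstructed as a standalone function:

    get_ip_address() {
        local interface=$1
        # ip -o -4 emits one record per line, e.g.:
        # 2: mlx_0_0  inet 192.168.100.8/24 scope global mlx_0_0 ...
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # prints 192.168.100.8 on this testbed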
00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:03.430 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:03.430 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:10:03.430 altname enp24s0f1np1 00:10:03.430 altname ens785f1np1 00:10:03.430 inet 192.168.100.9/24 scope global mlx_0_1 00:10:03.430 valid_lft forever preferred_lft forever 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:03.430 17:34:41 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@109 -- # continue 2 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:10:03.430 192.168.100.9' 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:10:03.430 192.168.100.9' 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # head -n 1 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # head -n 1 00:10:03.430 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:10:03.430 192.168.100.9' 00:10:03.431 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # tail -n +2 00:10:03.431 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:03.431 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:10:03.431 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:03.431 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:10:03.431 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:10:03.431 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:10:03.431 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:03.431 17:34:41 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:03.431 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:03.431 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:03.431 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # nvmfpid=573103 00:10:03.431 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:03.431 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 573103 00:10:03.431 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 573103 ']' 00:10:03.431 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.431 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:03.431 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.431 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:03.431 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:03.431 [2024-10-17 17:34:41.755937] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:10:03.431 [2024-10-17 17:34:41.755997] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.689 [2024-10-17 17:34:41.831634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:03.689 [2024-10-17 17:34:41.877951] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:03.689 [2024-10-17 17:34:41.877996] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:03.689 [2024-10-17 17:34:41.878006] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:03.689 [2024-10-17 17:34:41.878014] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:03.689 [2024-10-17 17:34:41.878021] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
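nvmfappstart, whose trace appears above, amounts to launching nvmf_tgt in the background, recording its pid, and polling the RPC socket until the target answers. A reduced sketch of that handshake; spdk_get_version serves as the liveness probe here, while the real waitforlisten helper adds socket checks and timeouts:

    spdk=/var/jenkins/workspace/nvmf-phy-autotest/spdk

    $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Block until the target responds on the default /var/tmp/spdk.sock.
    until $spdk/scripts/rpc.py -t 1 spdk_get_version &>/dev/null; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
    echo "nvmf_tgt is up with pid $nvmfpid"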
00:10:03.689 [2024-10-17 17:34:41.879407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.689 [2024-10-17 17:34:41.879431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:03.689 [2024-10-17 17:34:41.879485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:03.689 [2024-10-17 17:34:41.879486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.689 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:03.689 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:10:03.689 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:03.689 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:03.689 17:34:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:03.689 17:34:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:03.689 17:34:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:10:03.689 17:34:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.689 17:34:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:03.689 [2024-10-17 17:34:42.026682] rdma.c:2735:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:10:03.689 [2024-10-17 17:34:42.046861] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13a52c0/0x13a97b0) succeed. 00:10:03.689 [2024-10-17 17:34:42.057166] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13a6950/0x13eae50) succeed. 
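With both IB devices created, the target is provisioned entirely over RPC: an RDMA transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem carrying that bdev as a namespace, and a listener on 192.168.100.8:4420. The same five calls as plain rpc.py invocations, parameters copied from the surrounding trace:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    $rpc bdev_malloc_create 64 512         # returns the new bdev name, Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420

The five "NQN:... disconnected 1 controller(s)" lines that follow are the num_iterations=5 loop connecting to and disconnecting from this subsystem.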
00:10:03.972 17:34:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.972 17:34:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:03.972 17:34:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.972 17:34:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:03.972 17:34:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.972 17:34:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:03.972 17:34:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:03.972 17:34:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.972 17:34:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:03.972 17:34:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.972 17:34:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:03.972 17:34:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.972 17:34:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:03.972 17:34:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.972 17:34:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:03.972 17:34:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.972 17:34:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:03.972 [2024-10-17 17:34:42.208441] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:03.972 17:34:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.972 17:34:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:03.972 17:34:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:03.972 17:34:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:10.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.355 17:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:38.355 17:35:16 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:38.355 17:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:38.355 17:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:10:38.355 17:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:38.355 17:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:38.355 17:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:10:38.355 17:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:38.355 17:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:38.355 rmmod nvme_rdma 00:10:38.355 rmmod nvme_fabrics 00:10:38.355 17:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:38.355 17:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:10:38.355 17:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:10:38.355 17:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 573103 ']' 00:10:38.355 17:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 573103 00:10:38.355 17:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 573103 ']' 00:10:38.355 17:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 573103 00:10:38.355 17:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:10:38.355 17:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:38.355 17:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 573103 00:10:38.355 17:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:38.355 17:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:38.355 17:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 573103' 00:10:38.355 killing process with pid 573103 00:10:38.355 17:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 573103 00:10:38.355 17:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 573103 00:10:38.612 17:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:38.612 17:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:10:38.612 00:10:38.612 real 0m41.917s 00:10:38.612 user 2m21.055s 00:10:38.612 sys 0m6.844s 00:10:38.612 17:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:38.612 17:35:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:38.612 
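[Note: nvmftestfini is the mirror image of the setup: it unloads the host-side fabrics modules (the rmmod lines above) and kills the nvmf_tgt it started. A sketch of that teardown path with the 20-iteration retry loop elided:

    sync
    modprobe -v -r nvme-rdma       # emits "rmmod nvme_rdma" / "rmmod nvme_fabrics"
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                # pid 573103 in this run
    wait "$nvmfpid" 2>/dev/null || true
]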
************************************ 00:10:38.612 END TEST nvmf_connect_disconnect 00:10:38.612 ************************************ 00:10:38.612 17:35:16 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:10:38.612 17:35:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:38.612 17:35:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:38.612 17:35:16 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:38.612 ************************************ 00:10:38.612 START TEST nvmf_multitarget 00:10:38.612 ************************************ 00:10:38.612 17:35:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:10:38.612 * Looking for test storage... 00:10:38.612 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:38.612 17:35:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:38.612 17:35:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:10:38.612 17:35:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:38.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.871 --rc genhtml_branch_coverage=1 00:10:38.871 --rc genhtml_function_coverage=1 00:10:38.871 --rc genhtml_legend=1 00:10:38.871 --rc geninfo_all_blocks=1 00:10:38.871 --rc geninfo_unexecuted_blocks=1 00:10:38.871 00:10:38.871 ' 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:38.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.871 --rc genhtml_branch_coverage=1 00:10:38.871 --rc genhtml_function_coverage=1 00:10:38.871 --rc genhtml_legend=1 00:10:38.871 --rc geninfo_all_blocks=1 00:10:38.871 --rc geninfo_unexecuted_blocks=1 00:10:38.871 00:10:38.871 ' 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:38.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.871 --rc genhtml_branch_coverage=1 00:10:38.871 --rc genhtml_function_coverage=1 00:10:38.871 --rc genhtml_legend=1 00:10:38.871 --rc geninfo_all_blocks=1 00:10:38.871 --rc geninfo_unexecuted_blocks=1 00:10:38.871 00:10:38.871 ' 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:38.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.871 --rc genhtml_branch_coverage=1 00:10:38.871 --rc genhtml_function_coverage=1 00:10:38.871 --rc genhtml_legend=1 00:10:38.871 --rc geninfo_all_blocks=1 00:10:38.871 --rc geninfo_unexecuted_blocks=1 00:10:38.871 00:10:38.871 ' 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:38.871 17:35:17 
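[Note: the long scripts/common.sh trace above is cmp_versions deciding that the installed lcov (1.15) is older than 2, which keeps the pre-2.0 "--rc lcov_branch_coverage=1" spelling seen in the exported LCOV_OPTS. The comparison is a component-wise walk over the dot-separated fields; a condensed sketch of the lt/cmp_versions pair, not the verbatim upstream code:

    lt() {                          # "is $1 < $2?"
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                    # equal is not less-than
    }
    lt 1.15 2 && echo "old lcov"    # 1 < 2 on the first field, so this prints
]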
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:10:38.871 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:38.872 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:38.872 17:35:17 
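[Note: the "[: : integer expression expected" complaint above is benign: nvmf/common.sh line 33 ends up running '[ "" -eq 1 ]' because the variable it tests is unset in this environment, and test(1) rejects an empty string as an integer operand. An illustration of the failure mode and the usual defensive spelling (not the upstream fix):

    v=""                    # stand-in for whatever common.sh:33 tests; empty here
    [ "$v" -eq 1 ]          # -> "[: : integer expression expected", exit status 2
    [ "${v:-0}" -eq 1 ]     # defaulting the operand keeps the test quiet
]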
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:10:38.872 17:35:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:10:45.429 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:10:45.429 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ mlx5_core == 
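[Note: gather_supported_nvmf_pci_devs builds its candidate list from a fixed registry of vendor:device IDs (Intel 0x8086 for E810/X722, Mellanox 0x15b3 for the ConnectX family), then narrows it to the Mellanox entries because SPDK_TEST_NVMF_NICS=mlx5; both functions of this host's 0x15b3:0x1013 adapter match. A rough equivalent of the same lookup with plain lspci (an illustration, not what the script runs):

    lspci -D -d 15b3:1013    # lists both ports of the 0x1013 ConnectX-family device found above
]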
unknown ]] 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:45.429 Found net devices under 0000:18:00.0: mlx_0_0 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:45.429 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:45.430 Found net devices under 0000:18:00.1: mlx_0_1 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # rdma_device_init 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:10:45.430 17:35:23 
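[Note: the device walk above pairs each Mellanox PCI function with its kernel netdev by listing sysfs; stripped of the surrounding bookkeeping it is just the following, sketched over the two functions found in this run:

    for pci in 0000:18:00.0 0000:18:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done
]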
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # uname 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@528 -- # allocate_nic_ips 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:45.430 17:35:23 
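[Note: rdma_device_init above is mostly modprobe: the whole IB/RDMA core stack is loaded before any interface is touched. The module list, in the load order shown at nvmf/common.sh@66-72:

    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe $mod
    done
]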
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:45.430 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:45.430 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:10:45.430 altname enp24s0f0np0 00:10:45.430 altname ens785f0np0 00:10:45.430 inet 192.168.100.8/24 scope global mlx_0_0 00:10:45.430 valid_lft forever preferred_lft forever 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:45.430 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:45.430 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:10:45.430 altname enp24s0f1np1 00:10:45.430 altname ens785f1np1 00:10:45.430 inet 192.168.100.9/24 scope global mlx_0_1 00:10:45.430 valid_lft forever preferred_lft forever 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget 
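[Note: get_ip_address is the one-liner being traced repeatedly here: the fourth field of 'ip -o -4 addr show' is the CIDR address, and cut drops the prefix length. A sketch:

    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8
    get_ip_address mlx_0_1   # -> 192.168.100.9
]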
-- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@109 -- # continue 2 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:10:45.430 192.168.100.9' 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:10:45.430 192.168.100.9' 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # 
head -n 1 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # tail -n +2 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:10:45.430 192.168.100.9' 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # head -n 1 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:45.430 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:45.431 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=580785 00:10:45.431 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 580785 00:10:45.431 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:45.431 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 580785 ']' 00:10:45.431 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.431 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:45.431 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.431 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:45.431 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:45.431 [2024-10-17 17:35:23.494020] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
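[Note: the head/tail pair above splits the two-line RDMA_IP_LIST into the first and second target addresses; spelled out as a sketch of nvmf/common.sh@483-484:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    echo "$NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"   # 192.168.100.8 192.168.100.9
]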
00:10:45.431 [2024-10-17 17:35:23.494077] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.431 [2024-10-17 17:35:23.566665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:45.431 [2024-10-17 17:35:23.608726] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:45.431 [2024-10-17 17:35:23.608774] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:45.431 [2024-10-17 17:35:23.608784] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:45.431 [2024-10-17 17:35:23.608807] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:45.431 [2024-10-17 17:35:23.608816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:45.431 [2024-10-17 17:35:23.610158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.431 [2024-10-17 17:35:23.610244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:45.431 [2024-10-17 17:35:23.610340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:45.431 [2024-10-17 17:35:23.610341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.431 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:45.431 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:10:45.431 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:45.431 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:45.431 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:45.431 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:45.431 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:45.431 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:45.431 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:45.688 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:45.688 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:45.688 "nvmf_tgt_1" 00:10:45.688 17:35:23 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:45.688 "nvmf_tgt_2" 00:10:45.945 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:45.945 
17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:45.945 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:45.945 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:45.945 true 00:10:45.945 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:46.202 true 00:10:46.202 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:46.202 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:46.202 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:46.202 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:46.202 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:46.202 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:46.202 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:10:46.202 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:10:46.202 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:10:46.202 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:10:46.202 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:46.202 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:10:46.202 rmmod nvme_rdma 00:10:46.202 rmmod nvme_fabrics 00:10:46.202 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:46.202 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:10:46.202 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:10:46.202 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 580785 ']' 00:10:46.203 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 580785 00:10:46.203 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 580785 ']' 00:10:46.203 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 580785 00:10:46.203 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:10:46.203 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:46.203 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 580785 00:10:46.461 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:46.461 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
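[Note: the multitarget test body above is a short RPC conversation: count targets, add two, recount, delete both, recount again. Condensed, with $rpc as shorthand for the multitarget_rpc.py path used in the trace and the log's negated '!=' checks rewritten as positive assertions:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32        # prints "nvmf_tgt_1"
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32        # prints "nvmf_tgt_2"
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]
    $rpc nvmf_delete_target -n nvmf_tgt_1              # prints "true"
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default
]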
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:46.461 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 580785' 00:10:46.461 killing process with pid 580785 00:10:46.461 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 580785 00:10:46.461 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 580785 00:10:46.461 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:46.461 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:10:46.461 00:10:46.461 real 0m7.907s 00:10:46.461 user 0m7.309s 00:10:46.461 sys 0m5.379s 00:10:46.461 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:46.461 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:46.461 ************************************ 00:10:46.461 END TEST nvmf_multitarget 00:10:46.461 ************************************ 00:10:46.720 17:35:24 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:10:46.720 17:35:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:46.720 17:35:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:46.720 17:35:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:46.720 ************************************ 00:10:46.720 START TEST nvmf_rpc 00:10:46.720 ************************************ 00:10:46.720 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:10:46.720 * Looking for test storage... 
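[Note: run_test is the harness that produces the START/END asterisk banners and the real/user/sys block above: it prints the opening banner, times the script under test, and prints the closing banner. A simplified sketch of the autotest_common.sh helper, with the coverage and xtrace plumbing omitted:

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"              # -> the real/user/sys summary seen above
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }
]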
00:10:46.720 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:46.720 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:46.720 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:10:46.720 17:35:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:46.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.720 --rc genhtml_branch_coverage=1 00:10:46.720 --rc genhtml_function_coverage=1 00:10:46.720 --rc genhtml_legend=1 00:10:46.720 --rc geninfo_all_blocks=1 00:10:46.720 --rc geninfo_unexecuted_blocks=1 00:10:46.720 00:10:46.720 ' 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:46.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.720 --rc genhtml_branch_coverage=1 00:10:46.720 --rc genhtml_function_coverage=1 00:10:46.720 --rc genhtml_legend=1 00:10:46.720 --rc geninfo_all_blocks=1 00:10:46.720 --rc geninfo_unexecuted_blocks=1 00:10:46.720 00:10:46.720 ' 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:46.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.720 --rc genhtml_branch_coverage=1 00:10:46.720 --rc genhtml_function_coverage=1 00:10:46.720 --rc genhtml_legend=1 00:10:46.720 --rc geninfo_all_blocks=1 00:10:46.720 --rc geninfo_unexecuted_blocks=1 00:10:46.720 00:10:46.720 ' 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:46.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.720 --rc genhtml_branch_coverage=1 00:10:46.720 --rc genhtml_function_coverage=1 00:10:46.720 --rc genhtml_legend=1 00:10:46.720 --rc geninfo_all_blocks=1 00:10:46.720 --rc geninfo_unexecuted_blocks=1 00:10:46.720 00:10:46.720 ' 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.720 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.721 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:46.721 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.721 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:10:46.721 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:46.721 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:46.721 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.721 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.721 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.721 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:46.721 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:46.721 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:46.721 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:46.721 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:46.721 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:46.721 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:46.721 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:10:46.721 17:35:25 
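The "[: : integer expression expected" complaint recorded above comes from nvmf/common.sh line 33, where build_nvmf_app_args runs '[' '' -eq 1 ']': the tested variable expanded empty, and test(1) needs an integer on both sides of -eq. A minimal sketch of a guard that would silence the message (the variable name is hypothetical; the trace only shows that it expanded empty):

    # Default the possibly-empty toggle to 0 so the numeric test always sees an integer.
    some_toggle="${SOME_TOGGLE:-0}"   # hypothetical name, not shown in the trace
    if [ "$some_toggle" -eq 1 ]; then
        echo "toggle enabled"
    fi
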
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.721 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:46.721 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:46.721 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:46.721 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.721 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.721 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.721 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:46.721 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:46.721 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:10:46.721 17:35:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:53.291 17:35:31 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:10:53.291 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:10:53.291 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:10:53.291 Found net devices under 0000:18:00.0: mlx_0_0 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:10:53.291 Found net devices under 0000:18:00.1: mlx_0_1 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # rdma_device_init 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # uname 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe ib_cm 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe ib_core 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe ib_umad 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@70 -- # modprobe iw_cm 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:10:53.291 17:35:31 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@528 -- # allocate_nic_ips 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # get_rdma_if_list 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:10:53.291 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:53.291 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:10:53.291 altname enp24s0f0np0 00:10:53.291 altname ens785f0np0 00:10:53.291 inet 192.168.100.8/24 scope global mlx_0_0 00:10:53.291 valid_lft forever preferred_lft forever 00:10:53.291 
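allocate_nic_ips resolves each RDMA interface's IPv4 address with the three-stage pipeline traced above. A minimal standalone sketch of that helper, using the interface name from this run:

    # First IPv4 address of an interface, as in nvmf/common.sh's get_ip_address:
    # `ip -o -4 addr show` prints one line per address, and field 4 is "ADDR/PREFIX".
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # prints 192.168.100.8 on this test bed
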
17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:53.291 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:10:53.292 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:53.292 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:10:53.292 altname enp24s0f1np1 00:10:53.292 altname ens785f1np1 00:10:53.292 inet 192.168.100.9/24 scope global mlx_0_1 00:10:53.292 valid_lft forever preferred_lft forever 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # get_rdma_if_list 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_0 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@108 -- # echo mlx_0_1 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@109 -- # continue 2 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # awk '{print $4}' 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # cut -d/ -f1 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:10:53.292 192.168.100.9' 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:10:53.292 192.168.100.9' 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # head -n 1 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:10:53.292 192.168.100.9' 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # tail -n +2 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # head -n 1 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
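The two discovered addresses are joined into a newline-separated RDMA_IP_LIST and then split with head/tail exactly as traced. A minimal sketch of that split:

    # One address per line; the first line becomes the first target IP,
    # the second line (tail -n +2 | head -n 1) becomes the second.
    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
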
00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=583861 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 583861 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 583861 ']' 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:53.292 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.292 [2024-10-17 17:35:31.515357] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:10:53.292 [2024-10-17 17:35:31.515434] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.292 [2024-10-17 17:35:31.591732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:53.292 [2024-10-17 17:35:31.638363] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:53.292 [2024-10-17 17:35:31.638414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:53.292 [2024-10-17 17:35:31.638450] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:53.292 [2024-10-17 17:35:31.638458] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:53.292 [2024-10-17 17:35:31.638465] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
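nvmfappstart launches the target with the flags visible above and blocks until the RPC socket answers. A minimal sketch of the same sequence; the polling loop stands in for common.sh's waitforlisten and is an assumption, not that helper's actual body:

    # Shm id 0, tracepoint mask 0xFFFF, core mask 0xF (four reactors, matching the
    # four "Reactor started" notices), then wait for /var/tmp/spdk.sock to respond.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.5
    done
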
00:10:53.292 [2024-10-17 17:35:31.639863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.292 [2024-10-17 17:35:31.639954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.292 [2024-10-17 17:35:31.640033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:53.292 [2024-10-17 17:35:31.640035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.550 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:53.550 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:53.550 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:53.550 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:53.550 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.550 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:53.550 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:53.550 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.550 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.550 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.550 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:53.550 "tick_rate": 2300000000, 00:10:53.550 "poll_groups": [ 00:10:53.550 { 00:10:53.550 "name": "nvmf_tgt_poll_group_000", 00:10:53.550 "admin_qpairs": 0, 00:10:53.550 "io_qpairs": 0, 00:10:53.550 "current_admin_qpairs": 0, 00:10:53.550 "current_io_qpairs": 0, 00:10:53.550 "pending_bdev_io": 0, 00:10:53.550 "completed_nvme_io": 0, 00:10:53.550 "transports": [] 00:10:53.550 }, 00:10:53.550 { 00:10:53.550 "name": "nvmf_tgt_poll_group_001", 00:10:53.550 "admin_qpairs": 0, 00:10:53.550 "io_qpairs": 0, 00:10:53.550 "current_admin_qpairs": 0, 00:10:53.550 "current_io_qpairs": 0, 00:10:53.550 "pending_bdev_io": 0, 00:10:53.550 "completed_nvme_io": 0, 00:10:53.550 "transports": [] 00:10:53.550 }, 00:10:53.550 { 00:10:53.550 "name": "nvmf_tgt_poll_group_002", 00:10:53.550 "admin_qpairs": 0, 00:10:53.550 "io_qpairs": 0, 00:10:53.550 "current_admin_qpairs": 0, 00:10:53.550 "current_io_qpairs": 0, 00:10:53.550 "pending_bdev_io": 0, 00:10:53.550 "completed_nvme_io": 0, 00:10:53.550 "transports": [] 00:10:53.550 }, 00:10:53.550 { 00:10:53.550 "name": "nvmf_tgt_poll_group_003", 00:10:53.550 "admin_qpairs": 0, 00:10:53.550 "io_qpairs": 0, 00:10:53.550 "current_admin_qpairs": 0, 00:10:53.550 "current_io_qpairs": 0, 00:10:53.550 "pending_bdev_io": 0, 00:10:53.550 "completed_nvme_io": 0, 00:10:53.550 "transports": [] 00:10:53.550 } 00:10:53.550 ] 00:10:53.550 }' 00:10:53.550 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:53.550 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:53.550 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:53.550 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:53.550 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 
== 4 )) 00:10:53.550 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:53.550 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:53.550 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:53.550 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.550 17:35:31 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.550 [2024-10-17 17:35:31.938385] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f91350/0x1f95840) succeed. 00:10:53.808 [2024-10-17 17:35:31.948820] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f929e0/0x1fd6ee0) succeed. 00:10:53.808 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.808 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:53.808 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.808 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.808 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.808 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:53.808 "tick_rate": 2300000000, 00:10:53.808 "poll_groups": [ 00:10:53.808 { 00:10:53.808 "name": "nvmf_tgt_poll_group_000", 00:10:53.808 "admin_qpairs": 0, 00:10:53.808 "io_qpairs": 0, 00:10:53.808 "current_admin_qpairs": 0, 00:10:53.808 "current_io_qpairs": 0, 00:10:53.808 "pending_bdev_io": 0, 00:10:53.808 "completed_nvme_io": 0, 00:10:53.808 "transports": [ 00:10:53.808 { 00:10:53.808 "trtype": "RDMA", 00:10:53.808 "pending_data_buffer": 0, 00:10:53.808 "devices": [ 00:10:53.808 { 00:10:53.808 "name": "mlx5_0", 00:10:53.808 "polls": 15254, 00:10:53.808 "idle_polls": 15254, 00:10:53.808 "completions": 0, 00:10:53.808 "requests": 0, 00:10:53.808 "request_latency": 0, 00:10:53.808 "pending_free_request": 0, 00:10:53.808 "pending_rdma_read": 0, 00:10:53.808 "pending_rdma_write": 0, 00:10:53.808 "pending_rdma_send": 0, 00:10:53.808 "total_send_wrs": 0, 00:10:53.808 "send_doorbell_updates": 0, 00:10:53.808 "total_recv_wrs": 4096, 00:10:53.808 "recv_doorbell_updates": 1 00:10:53.808 }, 00:10:53.808 { 00:10:53.808 "name": "mlx5_1", 00:10:53.808 "polls": 15254, 00:10:53.808 "idle_polls": 15254, 00:10:53.808 "completions": 0, 00:10:53.808 "requests": 0, 00:10:53.808 "request_latency": 0, 00:10:53.808 "pending_free_request": 0, 00:10:53.808 "pending_rdma_read": 0, 00:10:53.808 "pending_rdma_write": 0, 00:10:53.808 "pending_rdma_send": 0, 00:10:53.808 "total_send_wrs": 0, 00:10:53.808 "send_doorbell_updates": 0, 00:10:53.808 "total_recv_wrs": 4096, 00:10:53.808 "recv_doorbell_updates": 1 00:10:53.808 } 00:10:53.808 ] 00:10:53.808 } 00:10:53.808 ] 00:10:53.808 }, 00:10:53.808 { 00:10:53.808 "name": "nvmf_tgt_poll_group_001", 00:10:53.808 "admin_qpairs": 0, 00:10:53.808 "io_qpairs": 0, 00:10:53.808 "current_admin_qpairs": 0, 00:10:53.808 "current_io_qpairs": 0, 00:10:53.808 "pending_bdev_io": 0, 00:10:53.808 "completed_nvme_io": 0, 00:10:53.808 "transports": [ 00:10:53.808 { 00:10:53.808 "trtype": "RDMA", 00:10:53.808 "pending_data_buffer": 0, 00:10:53.808 "devices": [ 00:10:53.808 { 00:10:53.808 "name": "mlx5_0", 
00:10:53.808 "polls": 9858, 00:10:53.808 "idle_polls": 9858, 00:10:53.808 "completions": 0, 00:10:53.808 "requests": 0, 00:10:53.808 "request_latency": 0, 00:10:53.808 "pending_free_request": 0, 00:10:53.808 "pending_rdma_read": 0, 00:10:53.808 "pending_rdma_write": 0, 00:10:53.808 "pending_rdma_send": 0, 00:10:53.808 "total_send_wrs": 0, 00:10:53.808 "send_doorbell_updates": 0, 00:10:53.808 "total_recv_wrs": 4096, 00:10:53.808 "recv_doorbell_updates": 1 00:10:53.808 }, 00:10:53.808 { 00:10:53.808 "name": "mlx5_1", 00:10:53.808 "polls": 9858, 00:10:53.808 "idle_polls": 9858, 00:10:53.808 "completions": 0, 00:10:53.808 "requests": 0, 00:10:53.808 "request_latency": 0, 00:10:53.808 "pending_free_request": 0, 00:10:53.808 "pending_rdma_read": 0, 00:10:53.808 "pending_rdma_write": 0, 00:10:53.808 "pending_rdma_send": 0, 00:10:53.808 "total_send_wrs": 0, 00:10:53.808 "send_doorbell_updates": 0, 00:10:53.808 "total_recv_wrs": 4096, 00:10:53.808 "recv_doorbell_updates": 1 00:10:53.808 } 00:10:53.808 ] 00:10:53.808 } 00:10:53.808 ] 00:10:53.808 }, 00:10:53.808 { 00:10:53.808 "name": "nvmf_tgt_poll_group_002", 00:10:53.808 "admin_qpairs": 0, 00:10:53.808 "io_qpairs": 0, 00:10:53.808 "current_admin_qpairs": 0, 00:10:53.808 "current_io_qpairs": 0, 00:10:53.808 "pending_bdev_io": 0, 00:10:53.808 "completed_nvme_io": 0, 00:10:53.808 "transports": [ 00:10:53.808 { 00:10:53.808 "trtype": "RDMA", 00:10:53.808 "pending_data_buffer": 0, 00:10:53.808 "devices": [ 00:10:53.808 { 00:10:53.808 "name": "mlx5_0", 00:10:53.808 "polls": 5386, 00:10:53.808 "idle_polls": 5386, 00:10:53.808 "completions": 0, 00:10:53.808 "requests": 0, 00:10:53.808 "request_latency": 0, 00:10:53.808 "pending_free_request": 0, 00:10:53.808 "pending_rdma_read": 0, 00:10:53.808 "pending_rdma_write": 0, 00:10:53.808 "pending_rdma_send": 0, 00:10:53.808 "total_send_wrs": 0, 00:10:53.808 "send_doorbell_updates": 0, 00:10:53.808 "total_recv_wrs": 4096, 00:10:53.808 "recv_doorbell_updates": 1 00:10:53.808 }, 00:10:53.808 { 00:10:53.808 "name": "mlx5_1", 00:10:53.808 "polls": 5386, 00:10:53.808 "idle_polls": 5386, 00:10:53.808 "completions": 0, 00:10:53.808 "requests": 0, 00:10:53.808 "request_latency": 0, 00:10:53.808 "pending_free_request": 0, 00:10:53.808 "pending_rdma_read": 0, 00:10:53.808 "pending_rdma_write": 0, 00:10:53.808 "pending_rdma_send": 0, 00:10:53.808 "total_send_wrs": 0, 00:10:53.808 "send_doorbell_updates": 0, 00:10:53.808 "total_recv_wrs": 4096, 00:10:53.808 "recv_doorbell_updates": 1 00:10:53.808 } 00:10:53.808 ] 00:10:53.808 } 00:10:53.808 ] 00:10:53.808 }, 00:10:53.808 { 00:10:53.808 "name": "nvmf_tgt_poll_group_003", 00:10:53.808 "admin_qpairs": 0, 00:10:53.808 "io_qpairs": 0, 00:10:53.808 "current_admin_qpairs": 0, 00:10:53.808 "current_io_qpairs": 0, 00:10:53.808 "pending_bdev_io": 0, 00:10:53.808 "completed_nvme_io": 0, 00:10:53.808 "transports": [ 00:10:53.808 { 00:10:53.808 "trtype": "RDMA", 00:10:53.808 "pending_data_buffer": 0, 00:10:53.808 "devices": [ 00:10:53.808 { 00:10:53.808 "name": "mlx5_0", 00:10:53.808 "polls": 898, 00:10:53.808 "idle_polls": 898, 00:10:53.808 "completions": 0, 00:10:53.808 "requests": 0, 00:10:53.808 "request_latency": 0, 00:10:53.808 "pending_free_request": 0, 00:10:53.808 "pending_rdma_read": 0, 00:10:53.808 "pending_rdma_write": 0, 00:10:53.808 "pending_rdma_send": 0, 00:10:53.808 "total_send_wrs": 0, 00:10:53.808 "send_doorbell_updates": 0, 00:10:53.808 "total_recv_wrs": 4096, 00:10:53.808 "recv_doorbell_updates": 1 00:10:53.808 }, 00:10:53.808 { 00:10:53.808 "name": "mlx5_1", 
00:10:53.808 "polls": 898, 00:10:53.808 "idle_polls": 898, 00:10:53.808 "completions": 0, 00:10:53.808 "requests": 0, 00:10:53.808 "request_latency": 0, 00:10:53.808 "pending_free_request": 0, 00:10:53.808 "pending_rdma_read": 0, 00:10:53.808 "pending_rdma_write": 0, 00:10:53.808 "pending_rdma_send": 0, 00:10:53.808 "total_send_wrs": 0, 00:10:53.808 "send_doorbell_updates": 0, 00:10:53.808 "total_recv_wrs": 4096, 00:10:53.808 "recv_doorbell_updates": 1 00:10:53.808 } 00:10:53.808 ] 00:10:53.808 } 00:10:53.808 ] 00:10:53.808 } 00:10:53.808 ] 00:10:53.808 }' 00:10:53.808 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:53.808 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:53.808 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:53.808 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:53.808 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:53.808 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:53.808 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:53.808 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:53.808 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:54.066 17:35:32 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.066 Malloc1 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.066 [2024-10-17 17:35:32.396399] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:54.066 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.067 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -a 192.168.100.8 -s 4420 00:10:54.067 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:10:54.067 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -a 192.168.100.8 -s 4420 00:10:54.067 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:10:54.067 17:35:32 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:54.067 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:10:54.067 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:54.067 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:10:54.067 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:54.067 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:10:54.067 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:10:54.067 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -a 192.168.100.8 -s 4420 00:10:54.067 [2024-10-17 17:35:32.432136] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c' 00:10:54.324 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:54.324 could not add new controller: failed to write to nvme-fabrics device 00:10:54.324 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:10:54.324 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:54.324 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:54.324 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:54.324 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:10:54.324 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.324 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.324 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.324 17:35:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:55.705 17:35:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:55.705 17:35:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:55.705 17:35:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:55.705 17:35:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:55.705 17:35:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:58.228 17:35:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:58.228 17:35:36 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:58.228 17:35:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:58.228 17:35:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:58.228 17:35:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:58.228 17:35:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:58.228 17:35:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:01.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.501 17:35:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:01.501 17:35:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:01.501 17:35:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:01.501 17:35:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.501 17:35:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:01.501 17:35:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:01.501 17:35:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:01.501 17:35:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:11:01.501 17:35:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.501 17:35:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.501 17:35:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.501 17:35:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:01.501 17:35:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:11:01.501 17:35:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:01.501 17:35:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:11:01.501 17:35:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:01.501 17:35:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:11:01.501 17:35:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:01.501 17:35:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:11:01.501 17:35:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:01.501 17:35:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:11:01.501 17:35:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:11:01.501 17:35:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:01.501 [2024-10-17 17:35:39.332057] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c' 00:11:01.501 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:01.501 could not add new controller: failed to write to nvme-fabrics device 00:11:01.501 17:35:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:11:01.501 17:35:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:01.501 17:35:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:01.501 17:35:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:01.501 17:35:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:01.501 17:35:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.501 17:35:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:01.501 17:35:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.501 17:35:39 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:02.868 17:35:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:02.868 17:35:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:02.868 17:35:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:02.868 17:35:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:02.868 17:35:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:04.762 17:35:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:04.762 17:35:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:04.762 17:35:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:04.762 17:35:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:04.762 17:35:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:04.762 17:35:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:04.762 17:35:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:08.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.036 17:35:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:08.036 17:35:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:08.036 17:35:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:08.036 17:35:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.036 17:35:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:08.036 17:35:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.036 17:35:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:08.036 17:35:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:08.036 17:35:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.036 17:35:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.036 17:35:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.036 17:35:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:08.036 17:35:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:08.036 17:35:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:08.036 17:35:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.036 17:35:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.036 17:35:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.036 17:35:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:08.036 17:35:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.036 17:35:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.036 [2024-10-17 17:35:46.300741] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:08.036 17:35:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.036 17:35:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:08.036 17:35:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.036 17:35:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.036 17:35:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.036 17:35:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:08.036 17:35:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.036 17:35:46 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.037 17:35:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.037 17:35:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:09.939 17:35:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:09.939 17:35:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:09.939 17:35:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:09.939 17:35:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:09.939 17:35:47 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:11.832 17:35:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:11.832 17:35:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:11.832 17:35:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:11.832 17:35:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:11.832 17:35:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:11.832 17:35:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:11.832 17:35:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:15.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.105 [2024-10-17 17:35:53.183147] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.105 17:35:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:16.478 17:35:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:16.478 17:35:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:16.478 17:35:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:16.478 17:35:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:16.478 17:35:54 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:18.490 17:35:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:18.490 17:35:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:18.490 
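The waitforserial polling traced around this point retries lsblk until a block device exposing the expected serial string appears after nvme connect. A minimal sketch of that loop, reconstructed from the xtrace output (variable names come from the trace; the retry bound and exact statement order in autotest_common.sh may differ):

    waitforserial() {
        # Poll lsblk until a namespace with the expected serial shows up.
        local serial=$1 i=0
        local nvme_device_counter=${2:-1} nvme_devices=0
        while (( i++ <= 15 )); do
            sleep 2
            # grep -c exits nonzero on no match; '|| true' keeps set -e happy
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial" || true)
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }
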
17:35:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:18.490 17:35:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:18.490 17:35:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:18.490 17:35:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:18.490 17:35:56 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:21.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.833 [2024-10-17 17:36:00.070798] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA 
Target Listening on 192.168.100.8 port 4420 *** 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.833 17:36:00 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:23.731 17:36:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:23.731 17:36:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:23.731 17:36:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:23.731 17:36:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:23.731 17:36:01 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:25.632 17:36:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:25.632 17:36:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:25.632 17:36:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:25.632 17:36:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:25.632 17:36:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:25.632 17:36:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:25.632 17:36:03 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:28.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.911 17:36:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:28.911 17:36:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:28.911 17:36:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:28.911 17:36:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:28.911 17:36:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:28.911 17:36:06 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:28.911 17:36:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:28.911 17:36:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:28.911 17:36:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.911 17:36:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.911 17:36:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.911 17:36:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:28.911 17:36:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.911 17:36:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.911 17:36:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.911 17:36:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:28.911 17:36:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:28.911 17:36:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.911 17:36:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.911 17:36:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.911 17:36:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:28.911 17:36:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.911 17:36:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.911 [2024-10-17 17:36:06.909329] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:28.911 17:36:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.911 17:36:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:28.911 17:36:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.911 17:36:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.911 17:36:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.911 17:36:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:28.911 17:36:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.911 17:36:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.911 17:36:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.911 17:36:06 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 
--hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:30.284 17:36:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:30.284 17:36:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:30.284 17:36:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:30.284 17:36:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:30.284 17:36:08 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:32.186 17:36:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:32.186 17:36:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:32.186 17:36:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:32.186 17:36:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:32.186 17:36:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:32.186 17:36:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:32.186 17:36:10 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:35.469 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.469 17:36:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:35.469 17:36:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:35.469 17:36:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:35.469 17:36:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:35.469 17:36:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:35.469 17:36:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:35.469 17:36:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:35.469 17:36:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:35.469 17:36:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.469 17:36:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.469 17:36:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.469 17:36:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.469 17:36:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.469 17:36:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.469 17:36:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.469 17:36:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:35.469 17:36:13 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:35.469 17:36:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.469 17:36:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.469 17:36:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.469 17:36:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:35.469 17:36:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.469 17:36:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.469 [2024-10-17 17:36:13.797109] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:35.469 17:36:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.469 17:36:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:35.469 17:36:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.469 17:36:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.469 17:36:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.469 17:36:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:35.469 17:36:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.469 17:36:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:35.469 17:36:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.469 17:36:13 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:37.368 17:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:37.368 17:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:37.368 17:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:37.368 17:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:37.368 17:36:15 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:39.266 17:36:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:39.266 17:36:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:39.266 17:36:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:39.266 17:36:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:39.266 17:36:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == 
nvme_device_counter )) 00:11:39.266 17:36:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:39.266 17:36:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:42.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.548 [2024-10-17 17:36:20.663301] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.548 [2024-10-17 17:36:20.712078] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.548 17:36:20 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.548 [2024-10-17 17:36:20.760234] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:42.548 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.549 [2024-10-17 17:36:20.808410] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.549 [2024-10-17 17:36:20.856578] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.549 17:36:20 
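Stripped of the xtrace plumbing, each loop iteration above exercises one full subsystem lifecycle over RDMA. The RPC skeleton below is assembled from the commands traced in this run (rpc_cmd wraps SPDK's scripts/rpc.py; NVME_HOST carries the --hostnqn/--hostid pair set up in nvmf/common.sh):

    # one iteration of the create/attach/detach cycle exercised above
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1

    nvme connect -i 15 "${NVME_HOST[@]}" -t rdma \
        -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
    waitforserial SPDKISFASTANDAWESOME
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME

    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
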
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.549 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.808 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.808 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:42.808 "tick_rate": 2300000000, 00:11:42.808 "poll_groups": [ 00:11:42.808 { 00:11:42.808 "name": "nvmf_tgt_poll_group_000", 00:11:42.808 "admin_qpairs": 2, 00:11:42.808 "io_qpairs": 27, 00:11:42.808 "current_admin_qpairs": 0, 00:11:42.808 "current_io_qpairs": 0, 00:11:42.808 "pending_bdev_io": 0, 00:11:42.808 "completed_nvme_io": 41, 00:11:42.808 "transports": [ 00:11:42.808 { 00:11:42.808 "trtype": "RDMA", 00:11:42.808 "pending_data_buffer": 0, 00:11:42.808 "devices": [ 00:11:42.808 { 00:11:42.808 "name": "mlx5_0", 00:11:42.808 "polls": 5977966, 00:11:42.808 "idle_polls": 5977763, 00:11:42.808 "completions": 203, 00:11:42.808 "requests": 101, 00:11:42.808 "request_latency": 10781408, 00:11:42.808 "pending_free_request": 0, 00:11:42.808 "pending_rdma_read": 0, 00:11:42.808 "pending_rdma_write": 0, 00:11:42.808 "pending_rdma_send": 0, 00:11:42.808 "total_send_wrs": 145, 00:11:42.808 "send_doorbell_updates": 102, 00:11:42.808 "total_recv_wrs": 4197, 00:11:42.808 "recv_doorbell_updates": 102 00:11:42.808 }, 00:11:42.808 { 00:11:42.808 "name": "mlx5_1", 00:11:42.808 "polls": 5977966, 00:11:42.808 "idle_polls": 5977966, 00:11:42.808 "completions": 0, 00:11:42.808 "requests": 0, 00:11:42.808 "request_latency": 0, 00:11:42.808 "pending_free_request": 0, 00:11:42.808 "pending_rdma_read": 0, 00:11:42.808 "pending_rdma_write": 0, 00:11:42.808 "pending_rdma_send": 0, 00:11:42.808 "total_send_wrs": 0, 00:11:42.808 "send_doorbell_updates": 0, 00:11:42.808 "total_recv_wrs": 4096, 00:11:42.808 "recv_doorbell_updates": 1 00:11:42.808 } 00:11:42.808 ] 00:11:42.808 } 00:11:42.808 ] 00:11:42.808 }, 00:11:42.808 { 00:11:42.808 "name": "nvmf_tgt_poll_group_001", 00:11:42.808 "admin_qpairs": 2, 00:11:42.808 "io_qpairs": 26, 00:11:42.808 "current_admin_qpairs": 0, 00:11:42.808 "current_io_qpairs": 0, 00:11:42.808 "pending_bdev_io": 0, 00:11:42.808 "completed_nvme_io": 163, 00:11:42.808 "transports": [ 00:11:42.808 { 00:11:42.808 "trtype": "RDMA", 00:11:42.808 "pending_data_buffer": 0, 00:11:42.808 "devices": [ 00:11:42.808 { 00:11:42.808 "name": "mlx5_0", 00:11:42.808 "polls": 5940367, 00:11:42.808 "idle_polls": 5939984, 00:11:42.808 "completions": 446, 00:11:42.808 "requests": 223, 00:11:42.808 "request_latency": 45593358, 00:11:42.808 "pending_free_request": 0, 00:11:42.808 "pending_rdma_read": 0, 00:11:42.808 "pending_rdma_write": 0, 00:11:42.808 "pending_rdma_send": 0, 00:11:42.808 "total_send_wrs": 390, 00:11:42.808 "send_doorbell_updates": 189, 00:11:42.808 "total_recv_wrs": 4319, 00:11:42.808 "recv_doorbell_updates": 190 00:11:42.808 }, 00:11:42.808 { 00:11:42.808 "name": "mlx5_1", 00:11:42.808 "polls": 5940367, 00:11:42.808 "idle_polls": 5940367, 00:11:42.808 "completions": 0, 00:11:42.808 "requests": 0, 00:11:42.808 "request_latency": 0, 00:11:42.808 "pending_free_request": 0, 00:11:42.808 
"pending_rdma_read": 0, 00:11:42.808 "pending_rdma_write": 0, 00:11:42.808 "pending_rdma_send": 0, 00:11:42.808 "total_send_wrs": 0, 00:11:42.808 "send_doorbell_updates": 0, 00:11:42.808 "total_recv_wrs": 4096, 00:11:42.808 "recv_doorbell_updates": 1 00:11:42.808 } 00:11:42.808 ] 00:11:42.808 } 00:11:42.808 ] 00:11:42.808 }, 00:11:42.808 { 00:11:42.808 "name": "nvmf_tgt_poll_group_002", 00:11:42.808 "admin_qpairs": 1, 00:11:42.808 "io_qpairs": 26, 00:11:42.808 "current_admin_qpairs": 0, 00:11:42.808 "current_io_qpairs": 0, 00:11:42.808 "pending_bdev_io": 0, 00:11:42.808 "completed_nvme_io": 125, 00:11:42.808 "transports": [ 00:11:42.808 { 00:11:42.808 "trtype": "RDMA", 00:11:42.808 "pending_data_buffer": 0, 00:11:42.808 "devices": [ 00:11:42.808 { 00:11:42.808 "name": "mlx5_0", 00:11:42.808 "polls": 6038956, 00:11:42.808 "idle_polls": 6038682, 00:11:42.808 "completions": 313, 00:11:42.808 "requests": 156, 00:11:42.808 "request_latency": 33136942, 00:11:42.808 "pending_free_request": 0, 00:11:42.808 "pending_rdma_read": 0, 00:11:42.808 "pending_rdma_write": 0, 00:11:42.808 "pending_rdma_send": 0, 00:11:42.808 "total_send_wrs": 271, 00:11:42.808 "send_doorbell_updates": 133, 00:11:42.808 "total_recv_wrs": 4252, 00:11:42.808 "recv_doorbell_updates": 133 00:11:42.808 }, 00:11:42.808 { 00:11:42.808 "name": "mlx5_1", 00:11:42.808 "polls": 6038956, 00:11:42.808 "idle_polls": 6038956, 00:11:42.808 "completions": 0, 00:11:42.808 "requests": 0, 00:11:42.808 "request_latency": 0, 00:11:42.808 "pending_free_request": 0, 00:11:42.808 "pending_rdma_read": 0, 00:11:42.808 "pending_rdma_write": 0, 00:11:42.808 "pending_rdma_send": 0, 00:11:42.808 "total_send_wrs": 0, 00:11:42.808 "send_doorbell_updates": 0, 00:11:42.808 "total_recv_wrs": 4096, 00:11:42.808 "recv_doorbell_updates": 1 00:11:42.808 } 00:11:42.808 ] 00:11:42.808 } 00:11:42.808 ] 00:11:42.808 }, 00:11:42.808 { 00:11:42.808 "name": "nvmf_tgt_poll_group_003", 00:11:42.808 "admin_qpairs": 2, 00:11:42.808 "io_qpairs": 26, 00:11:42.808 "current_admin_qpairs": 0, 00:11:42.808 "current_io_qpairs": 0, 00:11:42.808 "pending_bdev_io": 0, 00:11:42.808 "completed_nvme_io": 126, 00:11:42.808 "transports": [ 00:11:42.808 { 00:11:42.808 "trtype": "RDMA", 00:11:42.808 "pending_data_buffer": 0, 00:11:42.808 "devices": [ 00:11:42.808 { 00:11:42.808 "name": "mlx5_0", 00:11:42.808 "polls": 4723739, 00:11:42.808 "idle_polls": 4723414, 00:11:42.808 "completions": 370, 00:11:42.808 "requests": 185, 00:11:42.808 "request_latency": 38150466, 00:11:42.808 "pending_free_request": 0, 00:11:42.808 "pending_rdma_read": 0, 00:11:42.808 "pending_rdma_write": 0, 00:11:42.808 "pending_rdma_send": 0, 00:11:42.808 "total_send_wrs": 315, 00:11:42.808 "send_doorbell_updates": 159, 00:11:42.808 "total_recv_wrs": 4281, 00:11:42.808 "recv_doorbell_updates": 160 00:11:42.808 }, 00:11:42.808 { 00:11:42.808 "name": "mlx5_1", 00:11:42.808 "polls": 4723739, 00:11:42.808 "idle_polls": 4723739, 00:11:42.808 "completions": 0, 00:11:42.808 "requests": 0, 00:11:42.808 "request_latency": 0, 00:11:42.808 "pending_free_request": 0, 00:11:42.808 "pending_rdma_read": 0, 00:11:42.808 "pending_rdma_write": 0, 00:11:42.808 "pending_rdma_send": 0, 00:11:42.808 "total_send_wrs": 0, 00:11:42.808 "send_doorbell_updates": 0, 00:11:42.808 "total_recv_wrs": 4096, 00:11:42.808 "recv_doorbell_updates": 1 00:11:42.808 } 00:11:42.808 ] 00:11:42.808 } 00:11:42.808 ] 00:11:42.808 } 00:11:42.808 ] 00:11:42.808 }' 00:11:42.808 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum 
'.poll_groups[].admin_qpairs' 00:11:42.808 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:42.808 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:42.808 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:42.808 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:42.808 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:42.808 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:42.808 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:42.808 17:36:20 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:42.808 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:11:42.808 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:11:42.808 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:11:42.808 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:11:42.808 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:11:42.808 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:42.808 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 1332 > 0 )) 00:11:42.808 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:11:42.808 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:11:42.808 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:42.808 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:11:42.808 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 127662174 > 0 )) 00:11:42.808 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:42.808 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:42.808 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:42.808 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:11:42.808 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:11:42.808 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:11:42.808 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:11:42.808 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:42.808 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:11:42.808 rmmod nvme_rdma 00:11:42.808 rmmod nvme_fabrics 00:11:42.808 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:42.808 
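The jsum helper expanded a few lines up totals one numeric field across the nvmf_get_stats JSON with a jq filter piped into awk. A minimal reconstruction from the expanded commands (the function lives in target/rpc.sh; $stats stands for the captured nvmf_get_stats output, and how it is fed to jq is an assumption here):

    jsum() {
        local filter=$1
        # extract one number per poll group / device, then total them
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    # e.g. jsum '.poll_groups[].io_qpairs'  ->  105 in the run above,
    # which the test then asserts is > 0
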
17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:11:42.808 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:11:42.808 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 583861 ']' 00:11:42.809 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 583861 00:11:42.809 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 583861 ']' 00:11:42.809 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 583861 00:11:42.809 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:11:42.809 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:42.809 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 583861 00:11:43.067 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:43.067 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:43.067 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 583861' 00:11:43.067 killing process with pid 583861 00:11:43.067 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 583861 00:11:43.067 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 583861 00:11:43.326 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:43.326 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:11:43.326 00:11:43.326 real 0m56.624s 00:11:43.326 user 3m20.804s 00:11:43.326 sys 0m7.457s 00:11:43.326 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:43.326 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:43.326 ************************************ 00:11:43.326 END TEST nvmf_rpc 00:11:43.326 ************************************ 00:11:43.326 17:36:21 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:11:43.326 17:36:21 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:43.326 17:36:21 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:43.326 17:36:21 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:43.326 ************************************ 00:11:43.326 START TEST nvmf_invalid 00:11:43.326 ************************************ 00:11:43.326 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:11:43.326 * Looking for test storage... 
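The killprocess helper traced just before the END TEST summary verifies that the PID still names the expected process before killing it. A rough sketch inferred from the xtrace (the real helper in autotest_common.sh covers non-Linux platforms, and its handling of a sudo parent is simplified here):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        if [[ $(uname) == Linux ]]; then
            # the trace checks the process' comm name (reactor_0 here)
            # against 'sudo' before sending the signal
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ $process_name == sudo ]] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }
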
00:11:43.326 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:43.326 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:43.326 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:11:43.326 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:43.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.585 --rc genhtml_branch_coverage=1 00:11:43.585 --rc genhtml_function_coverage=1 00:11:43.585 --rc genhtml_legend=1 00:11:43.585 --rc geninfo_all_blocks=1 00:11:43.585 --rc geninfo_unexecuted_blocks=1 00:11:43.585 00:11:43.585 ' 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:43.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.585 --rc genhtml_branch_coverage=1 00:11:43.585 --rc genhtml_function_coverage=1 00:11:43.585 --rc genhtml_legend=1 00:11:43.585 --rc geninfo_all_blocks=1 00:11:43.585 --rc geninfo_unexecuted_blocks=1 00:11:43.585 00:11:43.585 ' 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:43.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.585 --rc genhtml_branch_coverage=1 00:11:43.585 --rc genhtml_function_coverage=1 00:11:43.585 --rc genhtml_legend=1 00:11:43.585 --rc geninfo_all_blocks=1 00:11:43.585 --rc geninfo_unexecuted_blocks=1 00:11:43.585 00:11:43.585 ' 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:43.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.585 --rc genhtml_branch_coverage=1 00:11:43.585 --rc genhtml_function_coverage=1 00:11:43.585 --rc genhtml_legend=1 00:11:43.585 --rc geninfo_all_blocks=1 00:11:43.585 --rc geninfo_unexecuted_blocks=1 00:11:43.585 00:11:43.585 ' 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:43.585 
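The trace above walks the lcov version string through the cmp_versions helper in scripts/common.sh. As a minimal sketch (an assumed simplification of that helper, not a verbatim copy), the less-than test splits both versions on '.', '-' and ':' and compares components numerically, left to right:

lt() {
    # Split "1.15" and "2" into component arrays, as cmp_versions does.
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        # Missing components count as 0, so 1.15 vs 2 decides on 1 < 2.
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1 # equal versions are not strictly less-than
}
lt 1.15 2 && echo 'lcov older than 2'

Because lt 1.15 2 succeeds here, the run selects the lcov 1.x-style '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' options seen in LCOV_OPTS above.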
17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.585 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:43.586 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
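The "[: : integer expression expected" complaint captured above comes from handing an empty string to a numeric test ('[' '' -eq 1 ']' at nvmf/common.sh line 33). A hedged illustration of the failure mode and the usual guard (the variable name here is hypothetical; the log does not show which setting was empty):

flag=''                                  # empty, as in the trace above
[ "$flag" -eq 1 ] 2>/dev/null || echo 'numeric test errors out on empty input'
[ -n "$flag" ] && [ "$flag" -eq 1 ] && echo 'guarded test never misfires'

The harness shrugs the error off and continues, which matches the build_nvmf_app_args flow resuming right after it above.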
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:11:43.586 17:36:21 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:50.148 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:50.148 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:11:50.148 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:50.148 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:50.148 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:50.148 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:50.148 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:50.148 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:11:50.149 17:36:27 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:11:50.149 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:11:50.149 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:11:50.149 Found net devices under 0000:18:00.0: mlx_0_0 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:11:50.149 Found net devices under 0000:18:00.1: mlx_0_1 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # rdma_device_init 00:11:50.149 17:36:27 
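Both mlx5 ports are resolved to kernel netdev names straight from sysfs. A minimal sketch of the lookup traced above, using the PCI address from this test bed and the same two expansions as nvmf/common.sh@409 and @425:

pci=0000:18:00.0
# Expand the glob to the interface directory under the PCI device,
# then strip everything but the interface name.
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "Found net devices under $pci: ${pci_net_devs[*]}"   # -> mlx_0_0 here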
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # uname 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe ib_cm 00:11:50.149 17:36:27 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe ib_core 00:11:50.149 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe ib_umad 00:11:50.149 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:11:50.149 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@70 -- # modprobe iw_cm 00:11:50.149 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:11:50.149 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:11:50.149 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@528 -- # allocate_nic_ips 00:11:50.149 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:50.149 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # get_rdma_if_list 00:11:50.149 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:50.149 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:50.149 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:50.149 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:50.149 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:50.149 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:50.149 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:50.149 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:50.149 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:50.149 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:11:50.149 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:50.149 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:50.150 17:36:28 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:11:50.150 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:50.150 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:11:50.150 altname enp24s0f0np0 00:11:50.150 altname ens785f0np0 00:11:50.150 inet 192.168.100.8/24 scope global mlx_0_0 00:11:50.150 valid_lft forever preferred_lft forever 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:11:50.150 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:50.150 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:11:50.150 altname enp24s0f1np1 00:11:50.150 altname ens785f1np1 00:11:50.150 inet 192.168.100.9/24 scope global mlx_0_1 00:11:50.150 valid_lft forever preferred_lft forever 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # get_rdma_if_list 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid 
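get_ip_address is just an ip(8) one-liner; the pipeline traced above reads the first IPv4 address off an interface. A self-contained restatement of the same commands shown at nvmf/common.sh@116-117:

get_ip_address() {
    local interface=$1
    # Column 4 of 'ip -o -4 addr show' is addr/prefix; drop the prefix.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig
get_ip_address mlx_0_1   # -> 192.168.100.9

Those two addresses are what feed RDMA_IP_LIST and the first/second target IPs a few lines down.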
-- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_0 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@108 -- # echo mlx_0_1 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@109 -- # continue 2 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # awk '{print $4}' 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # cut -d/ -f1 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:11:50.150 192.168.100.9' 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:11:50.150 192.168.100.9' 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # head -n 1 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- 
# echo '192.168.100.8 00:11:50.150 192.168.100.9' 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # tail -n +2 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # head -n 1 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=594117 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 594117 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 594117 ']' 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:50.150 [2024-10-17 17:36:28.264295] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:11:50.150 [2024-10-17 17:36:28.264353] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.150 [2024-10-17 17:36:28.337101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:50.150 [2024-10-17 17:36:28.382100] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:50.150 [2024-10-17 17:36:28.382143] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:50.150 [2024-10-17 17:36:28.382153] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:50.150 [2024-10-17 17:36:28.382161] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:50.150 [2024-10-17 17:36:28.382168] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:50.150 [2024-10-17 17:36:28.383502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.150 [2024-10-17 17:36:28.383591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.150 [2024-10-17 17:36:28.383667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:50.150 [2024-10-17 17:36:28.383669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:50.150 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:50.408 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10669 00:11:50.408 [2024-10-17 17:36:28.711257] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:50.408 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:11:50.408 { 00:11:50.408 "nqn": "nqn.2016-06.io.spdk:cnode10669", 00:11:50.408 "tgt_name": "foobar", 00:11:50.408 "method": "nvmf_create_subsystem", 00:11:50.408 "req_id": 1 00:11:50.408 } 00:11:50.408 Got JSON-RPC error response 00:11:50.408 response: 00:11:50.408 { 00:11:50.408 "code": -32603, 00:11:50.408 "message": "Unable to find target foobar" 00:11:50.408 }' 00:11:50.408 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:11:50.408 { 00:11:50.408 "nqn": "nqn.2016-06.io.spdk:cnode10669", 00:11:50.408 "tgt_name": "foobar", 00:11:50.408 "method": "nvmf_create_subsystem", 00:11:50.408 "req_id": 1 00:11:50.408 } 00:11:50.408 Got JSON-RPC error response 00:11:50.408 response: 00:11:50.408 { 00:11:50.408 "code": -32603, 00:11:50.408 "message": "Unable to find target foobar" 00:11:50.408 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:50.408 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:50.408 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode28836 00:11:50.665 [2024-10-17 17:36:28.915982] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode28836: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:50.666 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:11:50.666 { 00:11:50.666 "nqn": "nqn.2016-06.io.spdk:cnode28836", 00:11:50.666 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:50.666 "method": "nvmf_create_subsystem", 00:11:50.666 "req_id": 1 00:11:50.666 } 00:11:50.666 Got JSON-RPC error response 00:11:50.666 response: 00:11:50.666 { 00:11:50.666 "code": -32602, 00:11:50.666 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:50.666 }' 00:11:50.666 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:11:50.666 { 00:11:50.666 "nqn": "nqn.2016-06.io.spdk:cnode28836", 00:11:50.666 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:50.666 "method": "nvmf_create_subsystem", 00:11:50.666 "req_id": 1 00:11:50.666 } 00:11:50.666 Got JSON-RPC error response 00:11:50.666 response: 00:11:50.666 { 00:11:50.666 "code": -32602, 00:11:50.666 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:50.666 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:50.666 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:50.666 17:36:28 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode21872 00:11:50.923 [2024-10-17 17:36:29.132697] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21872: invalid model number 'SPDK_Controller' 00:11:50.923 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:11:50.923 { 00:11:50.923 "nqn": "nqn.2016-06.io.spdk:cnode21872", 00:11:50.923 "model_number": "SPDK_Controller\u001f", 00:11:50.923 "method": "nvmf_create_subsystem", 00:11:50.923 "req_id": 1 00:11:50.923 } 00:11:50.923 Got JSON-RPC error response 00:11:50.923 response: 00:11:50.923 { 00:11:50.923 "code": -32602, 00:11:50.923 "message": "Invalid MN SPDK_Controller\u001f" 00:11:50.923 }' 00:11:50.923 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:11:50.923 { 00:11:50.923 "nqn": "nqn.2016-06.io.spdk:cnode21872", 00:11:50.923 "model_number": "SPDK_Controller\u001f", 00:11:50.923 "method": "nvmf_create_subsystem", 00:11:50.923 "req_id": 1 00:11:50.923 } 00:11:50.923 Got JSON-RPC error response 00:11:50.923 response: 00:11:50.923 { 00:11:50.923 "code": -32602, 00:11:50.923 "message": "Invalid MN SPDK_Controller\u001f" 00:11:50.923 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:50.923 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:50.923 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:11:50.923 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:50.923 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid 
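Each of these cases drives nvmf_create_subsystem with one deliberately bad argument and pattern-matches the JSON-RPC error text, as the [[ ... == *\U\n\a\b\l\e* ]] checks above show. A condensed sketch of that pattern (the rpc path, flags and NQNs are taken from the trace; the helper name is mine):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
expect_rpc_error() {
    # Run the RPC, capture the failure text, require the expected message.
    local expected=$1; shift
    local out
    out=$("$rpc" "$@" 2>&1) && return 1   # the call is supposed to fail
    [[ $out == *"$expected"* ]]
}
expect_rpc_error 'Unable to find target' \
    nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10669
expect_rpc_error 'Invalid MN' \
    nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode21872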
-- target/invalid.sh@21 -- # local chars 00:11:50.923 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:50.923 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:50.923 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:50.923 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:11:50.923 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:11:50.923 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:11:50.923 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:50.923 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:50.923 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:11:50.923 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:11:50.923 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:11:50.923 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:50.924 17:36:29 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:50.924 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x29' 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ c == \- ]] 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'c,;B}|&jE9A?H %tQj)P#' 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'c,;B}|&jE9A?H %tQj)P#' nqn.2016-06.io.spdk:cnode11936 00:11:51.182 [2024-10-17 17:36:29.526042] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11936: invalid serial number 'c,;B}|&jE9A?H %tQj)P#' 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:11:51.182 { 00:11:51.182 "nqn": "nqn.2016-06.io.spdk:cnode11936", 00:11:51.182 "serial_number": "c,;B}|&jE9A?H %tQj)P#", 00:11:51.182 "method": "nvmf_create_subsystem", 00:11:51.182 "req_id": 1 00:11:51.182 } 00:11:51.182 Got JSON-RPC error response 00:11:51.182 response: 00:11:51.182 { 00:11:51.182 "code": -32602, 00:11:51.182 "message": "Invalid SN c,;B}|&jE9A?H %tQj)P#" 00:11:51.182 }' 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:11:51.182 { 00:11:51.182 "nqn": "nqn.2016-06.io.spdk:cnode11936", 00:11:51.182 "serial_number": "c,;B}|&jE9A?H %tQj)P#", 00:11:51.182 "method": "nvmf_create_subsystem", 00:11:51.182 "req_id": 1 00:11:51.182 } 00:11:51.182 Got JSON-RPC error response 00:11:51.182 response: 00:11:51.182 { 00:11:51.182 "code": -32602, 00:11:51.182 "message": "Invalid SN c,;B}|&jE9A?H %tQj)P#" 00:11:51.182 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' 
'62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:11:51.182 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.441 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.441 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:11:51.441 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:11:51.441 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:11:51.441 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.441 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.441 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:11:51.441 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:11:51.441 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:11:51.441 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.441 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.441 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:11:51.441 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:11:51.441 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:11:51.441 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.441 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.441 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:11:51.441 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:51.441 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:11:51.441 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.441 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.441 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:11:51.441 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:11:51.441 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:11:51.441 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.441 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.441 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:11:51.441 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:11:51.441 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:11:51.441 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.441 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@25 -- # echo -e '\x6c' 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.442 17:36:29 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll++ )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:51.442 17:36:29 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.442 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:11:51.443 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:51.443 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:11:51.443 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.443 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.443 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:11:51.443 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:11:51.443 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:11:51.443 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.443 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.443 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:11:51.443 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 43 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ * == \- ]] 00:11:51.701 17:36:29 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '*H ver2_l ? 
ver1_l : ver2_l) )) 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:54.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.283 --rc genhtml_branch_coverage=1 00:11:54.283 --rc genhtml_function_coverage=1 00:11:54.283 --rc genhtml_legend=1 00:11:54.283 --rc geninfo_all_blocks=1 00:11:54.283 --rc geninfo_unexecuted_blocks=1 00:11:54.283 00:11:54.283 ' 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:54.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.283 --rc genhtml_branch_coverage=1 00:11:54.283 --rc genhtml_function_coverage=1 00:11:54.283 --rc genhtml_legend=1 00:11:54.283 --rc geninfo_all_blocks=1 00:11:54.283 --rc geninfo_unexecuted_blocks=1 00:11:54.283 00:11:54.283 ' 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:54.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.283 --rc genhtml_branch_coverage=1 00:11:54.283 --rc genhtml_function_coverage=1 00:11:54.283 --rc genhtml_legend=1 00:11:54.283 --rc geninfo_all_blocks=1 00:11:54.283 --rc geninfo_unexecuted_blocks=1 00:11:54.283 00:11:54.283 ' 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:54.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.283 --rc genhtml_branch_coverage=1 00:11:54.283 --rc genhtml_function_coverage=1 00:11:54.283 --rc genhtml_legend=1 00:11:54.283 --rc geninfo_all_blocks=1 00:11:54.283 --rc geninfo_unexecuted_blocks=1 00:11:54.283 00:11:54.283 ' 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:54.283 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:54.284 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:11:54.284 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:54.284 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.284 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.284 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.284 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.284 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.284 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:54.284 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.284 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:11:54.284 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:54.284 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:54.284 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:54.284 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:54.284 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:54.284 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:54.284 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:54.284 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:54.284 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:54.284 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:54.284 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:54.542 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:11:54.542 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:54.542 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:54.542 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:54.542 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:54.542 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.542 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.542 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.542 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:54.542 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:54.542 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:11:54.542 17:36:32 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # 
local -ga x722 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:12:01.104 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1013 == 
\0\x\1\0\1\9 ]] 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:12:01.104 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:01.104 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:01.105 Found net devices under 0000:18:00.0: mlx_0_0 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:01.105 Found net devices under 0000:18:00.1: mlx_0_1 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.105 17:36:38 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # rdma_device_init 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # uname 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@528 -- # allocate_nic_ips 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:01.105 
17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:01.105 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:01.105 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:12:01.105 altname enp24s0f0np0 00:12:01.105 altname ens785f0np0 00:12:01.105 inet 192.168.100.8/24 scope global mlx_0_0 00:12:01.105 valid_lft forever preferred_lft forever 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:01.105 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:01.105 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:12:01.105 altname enp24s0f1np1 00:12:01.105 altname ens785f1np1 00:12:01.105 inet 192.168.100.9/24 scope global mlx_0_1 00:12:01.105 valid_lft forever preferred_lft forever 00:12:01.105 17:36:38 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:12:01.105 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@109 -- # continue 2 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:01.106 
17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:12:01.106 192.168.100.9' 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:12:01.106 192.168.100.9' 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # head -n 1 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:12:01.106 192.168.100.9' 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # tail -n +2 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # head -n 1 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=597629 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 597629 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 597629 ']' 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.106 17:36:38 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:01.106 17:36:38 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.106 [2024-10-17 17:36:38.978843] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:12:01.106 [2024-10-17 17:36:38.978909] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.106 [2024-10-17 17:36:39.052194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:01.106 [2024-10-17 17:36:39.098397] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:01.106 [2024-10-17 17:36:39.098446] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:01.106 [2024-10-17 17:36:39.098456] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:01.106 [2024-10-17 17:36:39.098465] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:01.106 [2024-10-17 17:36:39.098472] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:01.106 [2024-10-17 17:36:39.099805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:01.106 [2024-10-17 17:36:39.099881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:01.106 [2024-10-17 17:36:39.099883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.106 [2024-10-17 17:36:39.278764] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x13cdab0/0x13d1fa0) succeed. 
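At this point nvmf/common.sh has finished its bring-up: both mlx5 PCI functions were matched to net devices, each interface's IPv4 address was read back, the RDMA kernel modules were loaded, and the target app was started and given an RDMA transport with 1024 shared buffers. The address handling is the three-stage pipeline the trace runs twice, once per interface; a self-contained sketch (the helper name, the field/prefix handling, and the head/tail split are all taken from the trace, while the hard-coded list merely stands in for the live ip output):

    # first IPv4 address of an interface, with the /prefix stripped -- with
    # `ip -o -4`, the addr/prefix pair is field 4 of the one-line output
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    # nvmf/common.sh then splits the collected addresses into first and second
    # target IPs with the same head/tail pair shown in the trace
    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'    # stand-in for live output
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

With 192.168.100.8 as the first target IP, the RPCs that follow stand up the device under test: nvmf_create_subsystem creates cnode1, nvmf_subsystem_add_listener binds it to rdma/192.168.100.8:4420, and bdev_null_create backs it with NULL1 (arguments 1000 and 512, per the RPC call) before the connect_stress tool is launched against the subsystem.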
00:12:01.106 [2024-10-17 17:36:39.289196] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x13cf0a0/0x1413640) succeed. 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.106 [2024-10-17 17:36:39.410773] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.106 NULL1 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=597794 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.106 
17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.106 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.107 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.107 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.107 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.107 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.107 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.107 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.107 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.107 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.107 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.107 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.107 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.107 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.107 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.107 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.107 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.364 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.364 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.364 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.364 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.364 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.364 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.364 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.364 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.364 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.364 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.364 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.364 
17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.364 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.364 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.364 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:01.364 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:01.364 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 597794 00:12:01.364 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:01.364 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.364 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.621 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.621 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 597794 00:12:01.621 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:01.621 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.621 17:36:39 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.878 17:36:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.878 17:36:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 597794 00:12:01.878 17:36:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:01.878 17:36:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.878 17:36:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:02.135 17:36:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.135 17:36:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 597794 00:12:02.135 17:36:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:02.135 17:36:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.135 17:36:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:02.699 17:36:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.699 17:36:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 597794 00:12:02.699 17:36:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:02.699 17:36:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.699 17:36:40 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:02.956 17:36:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.956 
17:36:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 597794 00:12:02.956 17:36:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:02.956 17:36:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.956 17:36:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.214 17:36:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.214 17:36:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 597794 00:12:03.214 17:36:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:03.214 17:36:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.214 17:36:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.472 17:36:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.472 17:36:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 597794 00:12:03.472 17:36:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:03.472 17:36:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.472 17:36:41 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.036 17:36:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.036 17:36:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 597794 00:12:04.036 17:36:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.036 17:36:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.036 17:36:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.294 17:36:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.294 17:36:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 597794 00:12:04.294 17:36:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.294 17:36:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.294 17:36:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.552 17:36:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.552 17:36:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 597794 00:12:04.552 17:36:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.552 17:36:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.552 17:36:42 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.809 17:36:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
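Note: the repeating kill -0 / rpc_cmd pairs above and below are the watchdog loop at lines 34-35 of connect_stress.sh (per the @34/@35 trace markers): as long as signal 0 can still be delivered to the stress process (597794), the RPCs queued in rpc.txt are replayed against the target. A rough sketch of the loop those records imply; the exact body is an assumption, since the trace only shows the two commands per iteration:

    while kill -0 "$PERF_PID" 2>/dev/null; do  # signal 0 checks existence; nothing is delivered
        rpc_cmd < "$rpcs"                      # replay the RPCs assembled by the seq/cat loop above
    done
    wait "$PERF_PID"                           # the @38 wait in the teardown further down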
00:12:04.809 17:36:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 597794 00:12:04.809 17:36:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.809 17:36:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.809 17:36:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.066 17:36:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.066 17:36:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 597794 00:12:05.066 17:36:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.066 17:36:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.066 17:36:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.631 17:36:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.631 17:36:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 597794 00:12:05.631 17:36:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.631 17:36:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.631 17:36:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.888 17:36:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.888 17:36:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 597794 00:12:05.888 17:36:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.888 17:36:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.888 17:36:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.144 17:36:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.144 17:36:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 597794 00:12:06.144 17:36:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.144 17:36:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.144 17:36:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.401 17:36:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.401 17:36:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 597794 00:12:06.401 17:36:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.401 17:36:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.402 17:36:44 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.967 17:36:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:12:06.967 17:36:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 597794 00:12:06.967 17:36:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.967 17:36:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.967 17:36:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.224 17:36:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.224 17:36:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 597794 00:12:07.224 17:36:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.224 17:36:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.224 17:36:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.483 17:36:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.483 17:36:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 597794 00:12:07.483 17:36:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.483 17:36:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.483 17:36:45 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.741 17:36:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.741 17:36:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 597794 00:12:07.741 17:36:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.741 17:36:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.742 17:36:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.000 17:36:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.000 17:36:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 597794 00:12:08.000 17:36:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.000 17:36:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.000 17:36:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.565 17:36:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.565 17:36:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 597794 00:12:08.565 17:36:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.565 17:36:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.565 17:36:46 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.823 17:36:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:08.823 17:36:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 597794 00:12:08.823 17:36:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.823 17:36:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.823 17:36:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.081 17:36:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.081 17:36:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 597794 00:12:09.081 17:36:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.081 17:36:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.081 17:36:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.338 17:36:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.338 17:36:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 597794 00:12:09.338 17:36:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.338 17:36:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.339 17:36:47 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.904 17:36:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.904 17:36:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 597794 00:12:09.904 17:36:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.904 17:36:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.904 17:36:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.163 17:36:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.163 17:36:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 597794 00:12:10.163 17:36:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.163 17:36:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.163 17:36:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.421 17:36:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.421 17:36:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 597794 00:12:10.421 17:36:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.421 17:36:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.421 17:36:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.679 17:36:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:12:10.679 17:36:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 597794 00:12:10.679 17:36:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.679 17:36:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.679 17:36:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.937 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.937 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 597794 00:12:10.937 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.937 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.937 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.194 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:12:11.452 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.452 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 597794 00:12:11.452 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (597794) - No such process 00:12:11.452 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 597794 00:12:11.452 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:11.452 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:11.452 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:11.452 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:11.452 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:11.452 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:11.452 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:11.452 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:11.452 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:11.452 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:11.452 rmmod nvme_rdma 00:12:11.452 rmmod nvme_fabrics 00:12:11.452 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:11.452 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:11.452 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:12:11.452 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 597629 ']' 00:12:11.452 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 597629 00:12:11.452 17:36:49 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 597629 ']' 00:12:11.452 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 597629 00:12:11.453 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:12:11.453 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:11.453 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 597629 00:12:11.453 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:11.453 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:11.453 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 597629' 00:12:11.453 killing process with pid 597629 00:12:11.453 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 597629 00:12:11.453 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 597629 00:12:11.711 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:11.711 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:12:11.711 00:12:11.711 real 0m17.502s 00:12:11.711 user 0m40.368s 00:12:11.711 sys 0m7.466s 00:12:11.711 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:11.711 17:36:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.711 ************************************ 00:12:11.711 END TEST nvmf_connect_stress 00:12:11.711 ************************************ 00:12:11.711 17:36:50 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:12:11.711 17:36:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:11.711 17:36:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:11.711 17:36:50 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:11.711 ************************************ 00:12:11.711 START TEST nvmf_fused_ordering 00:12:11.711 ************************************ 00:12:11.711 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:12:11.969 * Looking for test storage... 
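Note: the teardown records above trace killprocess from autotest_common.sh against the target pid 597629: confirm the pid is set, read its command name with ps --no-headers -o comm= (reactor_1 here), refuse to kill anything running as plain sudo, then kill and wait. Reconstructed as a sketch from those records; the guard details are assumptions where the trace is silent:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                   # the '[' -z 597629 ']' check in the trace
        local name
        name=$(ps --no-headers -o comm= "$pid")     # resolved to reactor_1 above
        [ "$name" = sudo ] && return 1              # never signal a bare sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid" 2>/dev/null      # the @969 kill and @974 wait records
    }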
00:12:11.970 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:11.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.970 --rc genhtml_branch_coverage=1 00:12:11.970 --rc genhtml_function_coverage=1 00:12:11.970 --rc genhtml_legend=1 00:12:11.970 --rc geninfo_all_blocks=1 00:12:11.970 --rc geninfo_unexecuted_blocks=1 00:12:11.970 00:12:11.970 ' 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:11.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.970 --rc genhtml_branch_coverage=1 00:12:11.970 --rc genhtml_function_coverage=1 00:12:11.970 --rc genhtml_legend=1 00:12:11.970 --rc geninfo_all_blocks=1 00:12:11.970 --rc geninfo_unexecuted_blocks=1 00:12:11.970 00:12:11.970 ' 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:11.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.970 --rc genhtml_branch_coverage=1 00:12:11.970 --rc genhtml_function_coverage=1 00:12:11.970 --rc genhtml_legend=1 00:12:11.970 --rc geninfo_all_blocks=1 00:12:11.970 --rc geninfo_unexecuted_blocks=1 00:12:11.970 00:12:11.970 ' 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:11.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:11.970 --rc genhtml_branch_coverage=1 00:12:11.970 --rc genhtml_function_coverage=1 00:12:11.970 --rc genhtml_legend=1 00:12:11.970 --rc geninfo_all_blocks=1 00:12:11.970 --rc geninfo_unexecuted_blocks=1 00:12:11.970 00:12:11.970 ' 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:11.970 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:11.970 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:11.971 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:11.971 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:11.971 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.971 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:11.971 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.971 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:11.971 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:11.971 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:12:11.971 17:36:50 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:18.634 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:18.634 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:12:18.634 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:18.634 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:18.634 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:18.634 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:18.634 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:18.634 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:12:18.634 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:18.634 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:12:18.634 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:12:18.634 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:12:18.634 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # 
local -ga x722 00:12:18.634 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:12:18.634 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:12:18.634 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:18.634 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:18.634 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:18.634 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:18.634 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:18.634 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:18.634 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:18.634 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:18.634 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:18.634 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:18.634 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:18.634 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:12:18.635 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1013 == 
\0\x\1\0\1\9 ]] 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:12:18.635 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:18.635 Found net devices under 0000:18:00.0: mlx_0_0 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:18.635 Found net devices under 0000:18:00.1: mlx_0_1 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.635 17:36:56 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # rdma_device_init 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # uname 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@528 -- # allocate_nic_ips 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:18.635 
17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:18.635 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:18.635 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:12:18.635 altname enp24s0f0np0 00:12:18.635 altname ens785f0np0 00:12:18.635 inet 192.168.100.8/24 scope global mlx_0_0 00:12:18.635 valid_lft forever preferred_lft forever 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:18.635 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:18.635 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:12:18.635 altname enp24s0f1np1 00:12:18.635 altname ens785f1np1 00:12:18.635 inet 192.168.100.9/24 scope global mlx_0_1 00:12:18.635 valid_lft forever preferred_lft forever 00:12:18.635 17:36:56 
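The address extraction traced above reduces to a single pipeline: ip -o prints one record per address, awk takes the addr/prefix field, and cut drops the prefix length. As a sketch of what get_ip_address amounts to (matching the commands shown in the trace, not the verbatim helper):

  get_ip_address() {
      local interface=$1
      # Field 4 of the one-line record is "192.168.100.8/24"; strip the /24.
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
  get_ip_address mlx_0_1   # -> 192.168.100.9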
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:18.635 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@109 -- # continue 2 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:18.636 
17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:12:18.636 192.168.100.9' 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # head -n 1 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:12:18.636 192.168.100.9' 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:12:18.636 192.168.100.9' 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # tail -n +2 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # head -n 1 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=602042 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 602042 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 602042 ']' 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
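nvmfappstart, traced above, launches the target binary in the background, and waitforlisten then blocks until the app answers on /var/tmp/spdk.sock (the trace shows rpc_addr=/var/tmp/spdk.sock and max_retries=100). A reconstruction of that pattern assuming a simple poll loop; rpc_get_methods is a standard SPDK RPC, but the real helper's loop body differs:

  spdk_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  "$spdk_dir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # Bounded poll until the RPC server responds, mirroring max_retries=100.
  for (( i = 0; i < 100; i++ )); do
      "$spdk_dir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
      sleep 0.1   # assumed interval; the trace does not show it
  done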
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:18.636 17:36:56 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:18.636 [2024-10-17 17:36:56.879498] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:12:18.636 [2024-10-17 17:36:56.879566] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:18.636 [2024-10-17 17:36:56.953003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.636 [2024-10-17 17:36:56.995464] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:18.636 [2024-10-17 17:36:56.995507] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:18.636 [2024-10-17 17:36:56.995517] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:18.636 [2024-10-17 17:36:56.995525] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:18.636 [2024-10-17 17:36:56.995532] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:18.636 [2024-10-17 17:36:56.995979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.895 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:18.895 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:12:18.895 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:18.895 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:18.895 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:18.895 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:18.895 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:18.895 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.895 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:18.895 [2024-10-17 17:36:57.158223] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e463b0/0x1e4a8a0) succeed. 00:12:18.895 [2024-10-17 17:36:57.167565] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e47860/0x1e8bf40) succeed. 
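The RDMA transport now exists on both mlx5 ports; the rpc_cmd calls that follow in the trace finish the target setup. Written out as the equivalent direct rpc.py invocations (rpc_cmd is a thin wrapper around rpc.py; every subcommand and argument below is taken from the log):

  rpc="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  $rpc bdev_null_create NULL1 1000 512   # 1000 MiB null bdev, 512-byte blocks: the "size: 1GB" namespace below
  $rpc bdev_wait_for_examine
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1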
00:12:18.895 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.895 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:18.895 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.895 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:18.895 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.895 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:18.895 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.895 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:18.895 [2024-10-17 17:36:57.213435] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:18.895 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.895 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:18.895 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.895 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:18.895 NULL1 00:12:18.895 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.895 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:18.895 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.895 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:18.895 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.895 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:18.895 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.895 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:18.895 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.895 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:18.895 [2024-10-17 17:36:57.268975] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
00:12:18.895 [2024-10-17 17:36:57.269013] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid602229 ] 00:12:19.153 Attached to nqn.2016-06.io.spdk:cnode1 00:12:19.153 Namespace ID: 1 size: 1GB 00:12:19.153 fused_ordering(0) [... fused_ordering(1) through fused_ordering(1022) elided: 1024 consecutive fused-ordering operations logged between 00:12:19.153 and 00:12:19.675 ...] 00:12:19.675 fused_ordering(1023) 00:12:19.675 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:19.675 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:19.675 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:19.675 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:12:19.675 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:19.675 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:19.675 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:12:19.675 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:19.675 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:19.675 rmmod nvme_rdma 00:12:19.675 rmmod nvme_fabrics 00:12:19.675 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:19.675 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:12:19.675 17:36:57
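nvmftestfini, traced above and continuing below, is deliberately tolerant: module unload can fail while RDMA connections drain, so errors are masked with set +e and the removal is retried up to 20 times before the target pid is killed. A condensed sketch of the pattern (the retry delay and the helper bodies are reconstructions, not the verbatim script):

  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
      sleep 1   # assumed back-off; not shown in the trace
  done
  set -e

  killprocess() {
      local pid=$1
      kill -0 "$pid" 2> /dev/null || return 0   # already gone
      ps --no-headers -o comm= "$pid"           # the trace logs the name first
      kill "$pid"
      wait "$pid"
  }
  killprocess "$nvmfpid"   # pid 602042 in this run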
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:12:19.675 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 602042 ']' 00:12:19.675 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 602042 00:12:19.675 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 602042 ']' 00:12:19.675 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 602042 00:12:19.675 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:12:19.675 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:19.675 17:36:57 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 602042 00:12:19.675 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:19.675 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:19.675 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 602042' 00:12:19.675 killing process with pid 602042 00:12:19.675 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 602042 00:12:19.675 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 602042 00:12:19.933 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:19.933 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:12:19.933 00:12:19.933 real 0m8.187s 00:12:19.933 user 0m3.887s 00:12:19.933 sys 0m5.481s 00:12:19.933 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:19.933 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:19.933 ************************************ 00:12:19.933 END TEST nvmf_fused_ordering 00:12:19.933 ************************************ 00:12:19.933 17:36:58 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:12:19.933 17:36:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:19.933 17:36:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:19.933 17:36:58 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:19.933 ************************************ 00:12:19.933 START TEST nvmf_ns_masking 00:12:19.933 ************************************ 00:12:19.933 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:12:20.192 * Looking for test storage... 
00:12:20.192 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:20.192 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:20.192 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:20.192 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:20.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.193 --rc genhtml_branch_coverage=1 00:12:20.193 --rc genhtml_function_coverage=1 00:12:20.193 --rc genhtml_legend=1 00:12:20.193 --rc geninfo_all_blocks=1 00:12:20.193 --rc geninfo_unexecuted_blocks=1 00:12:20.193 00:12:20.193 ' 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:20.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.193 --rc genhtml_branch_coverage=1 00:12:20.193 --rc genhtml_function_coverage=1 00:12:20.193 --rc genhtml_legend=1 00:12:20.193 --rc geninfo_all_blocks=1 00:12:20.193 --rc geninfo_unexecuted_blocks=1 00:12:20.193 00:12:20.193 ' 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:20.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.193 --rc genhtml_branch_coverage=1 00:12:20.193 --rc genhtml_function_coverage=1 00:12:20.193 --rc genhtml_legend=1 00:12:20.193 --rc geninfo_all_blocks=1 00:12:20.193 --rc geninfo_unexecuted_blocks=1 00:12:20.193 00:12:20.193 ' 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:20.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.193 --rc genhtml_branch_coverage=1 00:12:20.193 --rc genhtml_function_coverage=1 00:12:20.193 --rc genhtml_legend=1 00:12:20.193 --rc geninfo_all_blocks=1 00:12:20.193 --rc geninfo_unexecuted_blocks=1 00:12:20.193 00:12:20.193 ' 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:20.193 17:36:58 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:20.193 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:20.193 17:36:58 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:20.193 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:20.194 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=682546c1-e91a-4311-8a35-70900ea7a16c 00:12:20.194 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:20.194 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=114ad383-9a1f-4c8f-879a-a3bd41118f21 00:12:20.194 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:20.194 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:20.194 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:20.194 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:20.194 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=993dc982-635d-4932-abe4-a1a86f2ef7e8 00:12:20.194 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:20.194 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:12:20.194 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.194 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:20.194 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:20.194 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:20.194 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.194 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.194 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.194 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:20.194 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:20.194 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:12:20.194 17:36:58 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # 
pci_drivers=() 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:26.751 17:37:05 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:12:26.751 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:12:26.751 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:26.751 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:26.752 Found net devices under 0000:18:00.0: mlx_0_0 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 
0 )) 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:26.752 Found net devices under 0000:18:00.1: mlx_0_1 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # rdma_device_init 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # uname 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@528 -- # allocate_nic_ips 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:26.752 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:12:27.010 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:27.010 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:12:27.010 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:27.010 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:27.010 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:27.010 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:27.010 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:27.010 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:27.010 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:12:27.010 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:27.010 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:27.010 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:27.010 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:27.010 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:27.010 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:27.010 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:27.010 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:27.010 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:27.010 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:27.010 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:12:27.010 altname enp24s0f0np0 00:12:27.010 altname ens785f0np0 00:12:27.010 inet 192.168.100.8/24 scope global mlx_0_0 00:12:27.010 valid_lft forever preferred_lft forever 00:12:27.010 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:27.010 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:27.010 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:27.011 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:27.011 link/ether 
24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:12:27.011 altname enp24s0f1np1 00:12:27.011 altname ens785f1np1 00:12:27.011 inet 192.168.100.9/24 scope global mlx_0_1 00:12:27.011 valid_lft forever preferred_lft forever 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@109 -- # continue 2 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 
00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:12:27.011 192.168.100.9' 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:12:27.011 192.168.100.9' 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # head -n 1 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:12:27.011 192.168.100.9' 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # tail -n +2 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # head -n 1 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=605300 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 605300 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 605300 ']' 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.011 17:37:05 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:27.011 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:27.011 [2024-10-17 17:37:05.355795] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:12:27.011 [2024-10-17 17:37:05.355857] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.269 [2024-10-17 17:37:05.429687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.269 [2024-10-17 17:37:05.473993] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.269 [2024-10-17 17:37:05.474040] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.269 [2024-10-17 17:37:05.474049] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.269 [2024-10-17 17:37:05.474058] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.269 [2024-10-17 17:37:05.474065] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:27.269 [2024-10-17 17:37:05.474529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.269 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:27.269 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:12:27.269 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:27.269 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:27.269 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:27.269 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.269 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:27.527 [2024-10-17 17:37:05.818421] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x247d080/0x2481570) succeed. 00:12:27.527 [2024-10-17 17:37:05.827645] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x247e530/0x24c2c10) succeed. 
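With the RDMA transport created and both mlx5 IB devices registered, the fixture built up over the next stretch of trace condenses to a handful of RPCs. A minimal sketch follows; $rpc stands in for the full workspace path to scripts/rpc.py seen in the trace, and $HOSTID for the uuidgen value the script generates at target/ns_masking.sh@19. The sizes, NQNs, address, and port are the ones this run actually uses.

# Target side: two 64 MiB malloc bdevs, one subsystem, first namespace, RDMA listener.
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc bdev_malloc_create 64 512 -b Malloc2
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
# Host side: connect with a fixed host NQN and host ID so masking rules can key on them.
nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    -I "$HOSTID" -a 192.168.100.8 -s 4420 -i 4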
00:12:27.527 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:27.527 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:27.527 17:37:05 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:27.785 Malloc1 00:12:27.785 17:37:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:28.042 Malloc2 00:12:28.042 17:37:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:28.299 17:37:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:28.299 17:37:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:28.556 [2024-10-17 17:37:06.835210] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:28.556 17:37:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:28.556 17:37:06 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 993dc982-635d-4932-abe4-a1a86f2ef7e8 -a 192.168.100.8 -s 4420 -i 4 00:12:29.119 17:37:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:29.119 17:37:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:29.119 17:37:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:29.119 17:37:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:29.119 17:37:07 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:31.014 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:31.014 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:31.014 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:31.014 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:31.014 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:31.014 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:31.014 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:31.014 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") 
| .Paths[0].Name' 00:12:31.014 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:31.014 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:31.014 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:31.014 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:31.014 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:31.272 [ 0]:0x1 00:12:31.272 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:31.272 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:31.272 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1dfd2775eb07452aba7e269693635768 00:12:31.272 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1dfd2775eb07452aba7e269693635768 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:31.272 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:31.272 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:31.272 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:31.272 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:31.530 [ 0]:0x1 00:12:31.530 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:31.530 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:31.530 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1dfd2775eb07452aba7e269693635768 00:12:31.530 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1dfd2775eb07452aba7e269693635768 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:31.530 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:31.530 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:31.530 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:31.530 [ 1]:0x2 00:12:31.530 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:31.530 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:31.530 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5c00a64289f74af8a25dc9c57f399399 00:12:31.530 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5c00a64289f74af8a25dc9c57f399399 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:31.530 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:31.530 17:37:09 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n 
nqn.2016-06.io.spdk:cnode1 00:12:32.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.463 17:37:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:32.463 17:37:10 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:32.720 17:37:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:32.720 17:37:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 993dc982-635d-4932-abe4-a1a86f2ef7e8 -a 192.168.100.8 -s 4420 -i 4 00:12:33.284 17:37:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:33.284 17:37:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:33.284 17:37:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.284 17:37:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:12:33.284 17:37:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:12:33.284 17:37:11 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:35.182 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:35.182 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:35.182 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.182 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:35.182 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.182 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:35.182 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:35.182 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:35.182 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:35.182 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:35.182 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:35.182 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:35.182 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:35.182 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:35.182 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.182 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:12:35.182 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.182 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:35.182 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:35.182 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:35.439 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:35.439 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:35.439 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:35.440 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.440 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:35.440 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:35.440 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:35.440 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:35.440 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:35.440 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:35.440 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:35.440 [ 0]:0x2 00:12:35.440 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:35.440 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:35.440 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5c00a64289f74af8a25dc9c57f399399 00:12:35.440 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5c00a64289f74af8a25dc9c57f399399 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.440 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:35.697 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:35.697 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:35.697 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:35.697 [ 0]:0x1 00:12:35.697 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:35.697 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:35.697 17:37:13 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1dfd2775eb07452aba7e269693635768 00:12:35.697 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1dfd2775eb07452aba7e269693635768 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.697 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:35.697 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:35.697 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:35.697 [ 1]:0x2 00:12:35.697 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:35.697 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:35.697 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5c00a64289f74af8a25dc9c57f399399 00:12:35.697 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5c00a64289f74af8a25dc9c57f399399 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.697 17:37:13 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:35.955 17:37:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:35.955 17:37:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:35.955 17:37:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:35.955 17:37:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:35.955 17:37:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.955 17:37:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:12:35.955 17:37:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:35.955 17:37:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:35.955 17:37:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:35.955 17:37:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:35.956 17:37:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:35.956 17:37:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:35.956 17:37:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:35.956 17:37:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.956 17:37:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:35.956 17:37:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( 
es > 128 )) 00:12:35.956 17:37:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:35.956 17:37:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:35.956 17:37:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:35.956 17:37:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:35.956 17:37:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:35.956 [ 0]:0x2 00:12:35.956 17:37:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:35.956 17:37:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:35.956 17:37:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5c00a64289f74af8a25dc9c57f399399 00:12:35.956 17:37:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5c00a64289f74af8a25dc9c57f399399 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.956 17:37:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:35.956 17:37:14 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:36.887 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.887 17:37:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:37.144 17:37:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:37.144 17:37:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 993dc982-635d-4932-abe4-a1a86f2ef7e8 -a 192.168.100.8 -s 4420 -i 4 00:12:37.711 17:37:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:37.711 17:37:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:37.711 17:37:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:37.711 17:37:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:37.711 17:37:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:37.711 17:37:15 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:39.611 17:37:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:39.611 17:37:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:39.611 17:37:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:39.611 17:37:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:39.611 17:37:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:39.611 17:37:17 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:39.611 17:37:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:39.611 17:37:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:39.611 17:37:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:39.611 17:37:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:39.611 17:37:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:39.611 17:37:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.611 17:37:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:39.611 [ 0]:0x1 00:12:39.611 17:37:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.611 17:37:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:39.611 17:37:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1dfd2775eb07452aba7e269693635768 00:12:39.611 17:37:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1dfd2775eb07452aba7e269693635768 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.611 17:37:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:39.611 17:37:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.611 17:37:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:39.611 [ 1]:0x2 00:12:39.611 17:37:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:39.611 17:37:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.611 17:37:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5c00a64289f74af8a25dc9c57f399399 00:12:39.611 17:37:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5c00a64289f74af8a25dc9c57f399399 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.611 17:37:17 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:39.870 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:39.870 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:39.870 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:39.870 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:39.870 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:39.870 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:12:39.870 17:37:18 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:39.870 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:39.870 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.870 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:39.870 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:39.870 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.870 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:39.870 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.870 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:39.870 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:39.870 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:39.870 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:39.870 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:39.870 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.870 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:39.870 [ 0]:0x2 00:12:39.870 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:39.870 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:40.130 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5c00a64289f74af8a25dc9c57f399399 00:12:40.130 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5c00a64289f74af8a25dc9c57f399399 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:40.130 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:40.130 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:40.130 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:40.130 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:40.130 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:40.130 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:40.130 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:40.130 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:40.130 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:40.130 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:40.130 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:12:40.130 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:40.130 [2024-10-17 17:37:18.439943] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:40.130 request: 00:12:40.130 { 00:12:40.130 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:40.130 "nsid": 2, 00:12:40.130 "host": "nqn.2016-06.io.spdk:host1", 00:12:40.130 "method": "nvmf_ns_remove_host", 00:12:40.130 "req_id": 1 00:12:40.130 } 00:12:40.130 Got JSON-RPC error response 00:12:40.130 response: 00:12:40.130 { 00:12:40.130 "code": -32602, 00:12:40.130 "message": "Invalid parameters" 00:12:40.130 } 00:12:40.130 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:40.130 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:40.130 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:40.130 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:40.130 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:40.130 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:40.130 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:40.130 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:40.130 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:40.130 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:12:40.130 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:40.130 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:40.130 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:40.130 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:40.130 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:40.130 17:37:18 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:40.389 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:40.389 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:40.389 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:40.389 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:40.389 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:40.389 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:40.389 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:40.389 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:40.389 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:40.389 [ 0]:0x2 00:12:40.389 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:40.389 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:40.389 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5c00a64289f74af8a25dc9c57f399399 00:12:40.389 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5c00a64289f74af8a25dc9c57f399399 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:40.389 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:40.389 17:37:18 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.324 17:37:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=607331 00:12:41.324 17:37:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:41.324 17:37:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.324 17:37:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 607331 /var/tmp/host.sock 00:12:41.325 17:37:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 607331 ']' 00:12:41.325 17:37:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:12:41.325 17:37:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:41.325 17:37:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:41.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
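The visibility probe the trace keeps repeating (target/ns_masking.sh @43-@45) reduces to two checks against the connected controller: the namespace must appear in list-ns output, and id-ns must report a non-zero NGUID; an all-zero NGUID is how a masked namespace presents. A minimal reconstruction from the trace, not the verbatim SPDK script:

ns_is_visible() {
    # listed by the controller at all?
    nvme list-ns /dev/nvme0 | grep "$1" || return 1
    # masked namespaces identify with an all-zero NGUID
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}

The masking itself is driven over JSON-RPC, as in the trace:
    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
Note the deliberate failure logged above: removing a host from namespace 2, whose visibility was never restricted, returns JSON-RPC -32602 "Invalid parameters", which the NOT wrapper counts as the expected outcome.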
00:12:41.325 17:37:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:41.325 17:37:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:41.325 [2024-10-17 17:37:19.467688] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:12:41.325 [2024-10-17 17:37:19.467751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid607331 ] 00:12:41.325 [2024-10-17 17:37:19.541293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.325 [2024-10-17 17:37:19.586624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:41.583 17:37:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:41.583 17:37:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:12:41.583 17:37:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:41.842 17:37:19 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:41.842 17:37:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 682546c1-e91a-4311-8a35-70900ea7a16c 00:12:41.842 17:37:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:12:41.842 17:37:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 682546C1E91A43118A3570900EA7A16C -i 00:12:42.101 17:37:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 114ad383-9a1f-4c8f-879a-a3bd41118f21 00:12:42.101 17:37:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:12:42.101 17:37:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 114AD3839A1F4C8F879AA3BD41118F21 -i 00:12:42.359 17:37:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:42.617 17:37:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:42.617 17:37:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:42.617 17:37:20 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b 
nvme0 00:12:42.875 nvme0n1 00:12:43.133 17:37:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:43.134 17:37:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:43.134 nvme1n2 00:12:43.134 17:37:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:43.134 17:37:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:43.134 17:37:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:43.134 17:37:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:43.134 17:37:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:43.392 17:37:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:43.392 17:37:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:43.392 17:37:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:43.392 17:37:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:43.651 17:37:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 682546c1-e91a-4311-8a35-70900ea7a16c == \6\8\2\5\4\6\c\1\-\e\9\1\a\-\4\3\1\1\-\8\a\3\5\-\7\0\9\0\0\e\a\7\a\1\6\c ]] 00:12:43.651 17:37:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:43.651 17:37:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:43.651 17:37:21 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:43.910 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 114ad383-9a1f-4c8f-879a-a3bd41118f21 == \1\1\4\a\d\3\8\3\-\9\a\1\f\-\4\c\8\f\-\8\7\9\a\-\a\3\b\d\4\1\1\1\8\f\2\1 ]] 00:12:43.910 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 607331 00:12:43.910 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 607331 ']' 00:12:43.910 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 607331 00:12:43.910 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:12:43.910 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:43.910 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 607331 00:12:43.910 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:43.910 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:43.910 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 607331' 00:12:43.910 killing process with pid 607331 00:12:43.910 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 607331 00:12:43.910 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 607331 00:12:44.168 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.427 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:12:44.427 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:12:44.427 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:44.427 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:12:44.427 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:44.427 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:44.427 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:12:44.427 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:44.427 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:44.427 rmmod nvme_rdma 00:12:44.427 rmmod nvme_fabrics 00:12:44.427 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:44.427 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:12:44.427 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:12:44.427 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 605300 ']' 00:12:44.427 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 605300 00:12:44.427 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 605300 ']' 00:12:44.427 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 605300 00:12:44.427 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:12:44.427 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:44.427 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 605300 00:12:44.686 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:44.686 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:44.686 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 605300' 00:12:44.686 killing process with pid 605300 00:12:44.686 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@969 -- # kill 605300 00:12:44.686 17:37:22 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 605300 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:12:44.944 00:12:44.944 real 0m24.760s 00:12:44.944 user 0m27.557s 00:12:44.944 sys 0m7.599s 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:44.944 ************************************ 00:12:44.944 END TEST nvmf_ns_masking 00:12:44.944 ************************************ 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:44.944 ************************************ 00:12:44.944 START TEST nvmf_nvme_cli 00:12:44.944 ************************************ 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:12:44.944 * Looking for test storage... 
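One helper from the ns_masking run above is worth spelling out: the -g NGUID values passed to nvmf_subsystem_add_ns (682546C1E91A43118A3570900EA7A16C, 114AD3839A1F4C8F879AA3BD41118F21) are simply the bdev UUIDs with the dashes stripped and the hex uppercased. The tr -d - step is visible in the trace; the uppercasing is inferred, so treat this as a sketch of the uuid2nguid helper rather than its exact source:

uuid2nguid() {
    local uuid=$1
    # uppercase, then drop the dashes (the tr invocation is verbatim from the trace)
    echo "${uuid^^}" | tr -d -
}
uuid2nguid 682546c1-e91a-4311-8a35-70900ea7a16c
# -> 682546C1E91A43118A3570900EA7A16C, matching the -g argument used above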
00:12:44.944 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:44.944 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:45.203 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:12:45.203 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:45.203 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:45.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.203 --rc genhtml_branch_coverage=1 00:12:45.203 --rc genhtml_function_coverage=1 00:12:45.203 --rc genhtml_legend=1 00:12:45.203 --rc geninfo_all_blocks=1 00:12:45.203 --rc geninfo_unexecuted_blocks=1 00:12:45.203 00:12:45.203 ' 00:12:45.203 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:45.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.203 --rc genhtml_branch_coverage=1 00:12:45.203 --rc genhtml_function_coverage=1 00:12:45.203 --rc genhtml_legend=1 00:12:45.203 --rc geninfo_all_blocks=1 00:12:45.203 --rc geninfo_unexecuted_blocks=1 00:12:45.203 00:12:45.203 ' 00:12:45.203 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:45.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.203 --rc genhtml_branch_coverage=1 00:12:45.203 --rc genhtml_function_coverage=1 00:12:45.203 --rc genhtml_legend=1 00:12:45.203 --rc geninfo_all_blocks=1 00:12:45.203 --rc geninfo_unexecuted_blocks=1 00:12:45.203 00:12:45.203 ' 00:12:45.203 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:45.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:45.203 --rc genhtml_branch_coverage=1 00:12:45.203 --rc genhtml_function_coverage=1 00:12:45.203 --rc genhtml_legend=1 00:12:45.203 --rc geninfo_all_blocks=1 00:12:45.203 --rc geninfo_unexecuted_blocks=1 00:12:45.203 00:12:45.203 ' 00:12:45.203 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:45.203 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # 
uname -s 00:12:45.203 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.203 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.203 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.203 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.203 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.203 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.203 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.203 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.203 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.203 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.203 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:12:45.203 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:12:45.203 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.203 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.203 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:45.203 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.203 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:45.203 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:12:45.203 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.203 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.203 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.203 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.203 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.203 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.204 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:45.204 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.204 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:12:45.204 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:45.204 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:45.204 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.204 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.204 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.204 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:45.204 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:45.204 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:45.204 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:45.204 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:45.204 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:45.204 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:45.204 17:37:23 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:45.204 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:45.204 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:12:45.204 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.204 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:45.204 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:45.204 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:45.204 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.204 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:45.204 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.204 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:45.204 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:45.204 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:12:45.204 17:37:23 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:12:51.771 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:12:51.771 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:12:51.771 Found net devices under 0000:18:00.0: mlx_0_0 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:12:51.771 Found net devices under 0000:18:00.1: mlx_0_1 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # rdma_device_init 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # uname 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # '[' Linux '!=' 
Linux ']' 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe ib_cm 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe ib_core 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe ib_umad 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@70 -- # modprobe iw_cm 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@528 -- # allocate_nic_ips 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # get_rdma_if_list 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:12:51.771 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@117 -- # awk '{print $4}' 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:12:51.772 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:51.772 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:12:51.772 altname enp24s0f0np0 00:12:51.772 altname ens785f0np0 00:12:51.772 inet 192.168.100.8/24 scope global mlx_0_0 00:12:51.772 valid_lft forever preferred_lft forever 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:12:51.772 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:51.772 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:12:51.772 altname enp24s0f1np1 00:12:51.772 altname ens785f1np1 00:12:51.772 inet 192.168.100.9/24 scope global mlx_0_1 00:12:51.772 valid_lft forever preferred_lft forever 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # get_rdma_if_list 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:12:51.772 17:37:29 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_0 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@108 -- # echo mlx_0_1 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@109 -- # continue 2 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # awk '{print $4}' 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # cut -d/ -f1 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:12:51.772 192.168.100.9' 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:12:51.772 192.168.100.9' 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # head -n 1 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:12:51.772 192.168.100.9' 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # tail -n +2 00:12:51.772 17:37:29 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # head -n 1 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=610673 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 610673 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 610673 ']' 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:51.772 [2024-10-17 17:37:29.754104] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:12:51.772 [2024-10-17 17:37:29.754164] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.772 [2024-10-17 17:37:29.827390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:51.772 [2024-10-17 17:37:29.874926] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.772 [2024-10-17 17:37:29.874974] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:51.772 [2024-10-17 17:37:29.874983] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.772 [2024-10-17 17:37:29.874991] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.772 [2024-10-17 17:37:29.874998] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:51.772 [2024-10-17 17:37:29.876414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.772 [2024-10-17 17:37:29.876509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.772 [2024-10-17 17:37:29.876535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:51.772 [2024-10-17 17:37:29.876537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:51.772 17:37:29 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:51.772 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.772 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:51.772 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.772 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:51.772 [2024-10-17 17:37:30.063174] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb642c0/0xb687b0) succeed. 00:12:51.772 [2024-10-17 17:37:30.073691] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb65950/0xba9e50) succeed. 
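The trace above brings up nvmf_tgt for the nvme_cli test and creates the RDMA transport; the rpc_cmd calls that follow configure the bdevs, the subsystem, and its listeners. Consolidated into a standalone sketch (assumptions: nvmf_tgt already serving RPCs on the default /var/tmp/spdk.sock, scripts/rpc.py invoked from the SPDK repo root), the same target-side setup is:

  # Target-side setup equivalent to the rpc_cmd sequence traced below.
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 64 MB malloc bdev, 512-byte blocks
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t rdma -a 192.168.100.8 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420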
00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:52.031 Malloc0 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:52.031 Malloc1 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:52.031 [2024-10-17 17:37:30.304515] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:52.031 17:37:30 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -a 192.168.100.8 -s 4420 00:12:52.031 00:12:52.031 Discovery Log Number of Records 2, Generation counter 2 00:12:52.031 =====Discovery Log Entry 0====== 00:12:52.031 trtype: rdma 00:12:52.031 adrfam: ipv4 00:12:52.031 subtype: current discovery subsystem 00:12:52.031 treq: not required 00:12:52.031 portid: 0 00:12:52.031 trsvcid: 4420 00:12:52.031 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:52.031 traddr: 192.168.100.8 00:12:52.031 eflags: explicit discovery connections, duplicate discovery information 00:12:52.031 rdma_prtype: not specified 00:12:52.031 rdma_qptype: connected 00:12:52.031 rdma_cms: rdma-cm 00:12:52.031 rdma_pkey: 0x0000 00:12:52.031 =====Discovery Log Entry 1====== 00:12:52.031 trtype: rdma 00:12:52.031 adrfam: ipv4 00:12:52.031 subtype: nvme subsystem 00:12:52.031 treq: not required 00:12:52.031 portid: 0 00:12:52.031 trsvcid: 4420 00:12:52.031 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:52.031 traddr: 192.168.100.8 00:12:52.031 eflags: none 00:12:52.031 rdma_prtype: not specified 00:12:52.031 rdma_qptype: connected 00:12:52.031 rdma_cms: rdma-cm 00:12:52.031 rdma_pkey: 0x0000 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:12:52.031 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:12:52.290 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:52.290 17:37:30 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:53.665 17:37:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:53.665 17:37:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:12:53.665 17:37:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.665 17:37:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:53.665 17:37:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:53.665 17:37:31 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:12:56.194 17:37:33 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:12:56.194 /dev/nvme0n2 ]] 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:56.194 17:37:34 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:59.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:59.476 
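The host side of the test, traced above, discovers the target's log page, connects to nqn.2016-06.io.spdk:cnode1, waits until lsblk shows both namespaces by serial, then disconnects. Condensed into a sketch (hostnqn/hostid copied from this run; flags reproduced verbatim from the trace):

  # Host-side discover/connect/verify/disconnect cycle, as traced above.
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
  HOSTID=800e967b-538f-e911-906e-001635649f5c
  nvme discover --hostnqn=$HOSTNQN --hostid=$HOSTID -t rdma -a 192.168.100.8 -s 4420
  nvme connect -i 15 --hostnqn=$HOSTNQN --hostid=$HOSTID \
      -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # waitforserial: expect 2
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1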
17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:12:59.476 rmmod nvme_rdma 00:12:59.476 rmmod nvme_fabrics 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 610673 ']' 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 610673 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 610673 ']' 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 610673 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 610673 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 610673' 00:12:59.476 killing process with pid 610673 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 610673 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 610673 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:12:59.476 00:12:59.476 real 0m14.583s 00:12:59.476 user 0m33.016s 00:12:59.476 sys 0m5.536s 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:59.476 ************************************ 00:12:59.476 END TEST nvmf_nvme_cli 00:12:59.476 ************************************ 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:59.476 17:37:37 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:59.476 ************************************ 00:12:59.476 START TEST nvmf_auth_target 00:12:59.476 ************************************ 00:12:59.477 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:12:59.736 * Looking for test storage... 00:12:59.736 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:59.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.736 --rc genhtml_branch_coverage=1 00:12:59.736 --rc genhtml_function_coverage=1 00:12:59.736 --rc genhtml_legend=1 00:12:59.736 --rc geninfo_all_blocks=1 00:12:59.736 --rc geninfo_unexecuted_blocks=1 00:12:59.736 00:12:59.736 ' 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:59.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.736 --rc genhtml_branch_coverage=1 00:12:59.736 --rc genhtml_function_coverage=1 00:12:59.736 --rc genhtml_legend=1 00:12:59.736 --rc geninfo_all_blocks=1 00:12:59.736 --rc geninfo_unexecuted_blocks=1 00:12:59.736 00:12:59.736 ' 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:59.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.736 --rc genhtml_branch_coverage=1 00:12:59.736 --rc genhtml_function_coverage=1 00:12:59.736 --rc genhtml_legend=1 00:12:59.736 --rc geninfo_all_blocks=1 00:12:59.736 --rc geninfo_unexecuted_blocks=1 00:12:59.736 00:12:59.736 ' 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:59.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.736 --rc genhtml_branch_coverage=1 00:12:59.736 --rc genhtml_function_coverage=1 00:12:59.736 --rc genhtml_legend=1 00:12:59.736 --rc geninfo_all_blocks=1 00:12:59.736 --rc geninfo_unexecuted_blocks=1 00:12:59.736 00:12:59.736 ' 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:59.736 17:37:37 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.736 17:37:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.736 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:12:59.736 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:12:59.736 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.736 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:59.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:12:59.737 17:37:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:13:06.305 17:37:43 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:13:06.305 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:13:06.305 17:37:43 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:06.305 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:13:06.306 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:13:06.306 Found net devices under 0000:18:00.0: mlx_0_0 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:13:06.306 Found net devices under 0000:18:00.1: mlx_0_1 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.306 17:37:43 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # rdma_device_init 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # uname 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # allocate_nic_ips 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev 
in "${rxe_net_devs[@]}" 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:06.306 17:37:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:13:06.306 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:06.306 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:13:06.306 altname enp24s0f0np0 00:13:06.306 altname ens785f0np0 00:13:06.306 inet 192.168.100.8/24 scope global mlx_0_0 00:13:06.306 valid_lft forever preferred_lft forever 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:13:06.306 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:06.306 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:13:06.306 altname enp24s0f1np1 00:13:06.306 altname ens785f1np1 00:13:06.306 inet 192.168.100.9/24 scope global mlx_0_1 00:13:06.306 valid_lft forever preferred_lft forever 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 
00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@109 -- # continue 2 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:06.306 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:13:06.307 17:37:44 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:13:06.307 192.168.100.9' 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:13:06.307 192.168.100.9' 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # head -n 1 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # head -n 1 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:13:06.307 192.168.100.9' 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # tail -n +2 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=614617 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 614617 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 614617 ']' 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
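The address-list split traced above at nvmf/common.sh@482-@485 flattens both interface addresses into RDMA_IP_LIST, then peels off the first and second target IPs with head/tail. A condensed sketch; variable names follow the log, and get_available_rdma_ips is the harness function being traced:

    RDMA_IP_LIST="$(get_available_rdma_ips)"   # newline-separated: 192.168.100.8, 192.168.100.9
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
    if [ -z "$NVMF_FIRST_TARGET_IP" ]; then    # @485: abort if no RDMA IPs came back
        echo "no RDMA IPs found" >&2
        exit 1
    fi
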
00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=614773 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=null 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=86c4cb8c728cda9506cde41349a490f8b207abb6d63ad27f 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.Nr9 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 86c4cb8c728cda9506cde41349a490f8b207abb6d63ad27f 0 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 86c4cb8c728cda9506cde41349a490f8b207abb6d63ad27f 0 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=86c4cb8c728cda9506cde41349a490f8b207abb6d63ad27f 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 
-- # python - 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.Nr9 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.Nr9 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Nr9 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=2947d9614bbf8504ad2d443f4bf94c9358d3640967c4c7e58e6feef01f8cbbeb 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.Unh 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 2947d9614bbf8504ad2d443f4bf94c9358d3640967c4c7e58e6feef01f8cbbeb 3 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 2947d9614bbf8504ad2d443f4bf94c9358d3640967c4c7e58e6feef01f8cbbeb 3 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=2947d9614bbf8504ad2d443f4bf94c9358d3640967c4c7e58e6feef01f8cbbeb 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.Unh 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.Unh 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Unh 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@752 -- # digest=sha256 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=bbb71be17022befe7788f29148d03441 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.4v9 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key bbb71be17022befe7788f29148d03441 1 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 bbb71be17022befe7788f29148d03441 1 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=bbb71be17022befe7788f29148d03441 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.4v9 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.4v9 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.4v9 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=0adcdfbdebe10c8b0005a40b0e561f34bf426d2c33ce7fe4 00:13:06.307 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:13:06.308 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.Ba4 00:13:06.308 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 0adcdfbdebe10c8b0005a40b0e561f34bf426d2c33ce7fe4 2 00:13:06.308 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 0adcdfbdebe10c8b0005a40b0e561f34bf426d2c33ce7fe4 2 00:13:06.308 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix 
key digest 00:13:06.308 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:13:06.308 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=0adcdfbdebe10c8b0005a40b0e561f34bf426d2c33ce7fe4 00:13:06.308 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:13:06.308 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.Ba4 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.Ba4 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.Ba4 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=4c772d3f33e7576653113c83cbbbaea9106f213998277dd5 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.HM6 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 4c772d3f33e7576653113c83cbbbaea9106f213998277dd5 2 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 4c772d3f33e7576653113c83cbbbaea9106f213998277dd5 2 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=4c772d3f33e7576653113c83cbbbaea9106f213998277dd5 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.HM6 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.HM6 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.HM6 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:13:06.567 17:37:44 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=9086e2de19ea0c6369187313d787c53f 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.MXR 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 9086e2de19ea0c6369187313d787c53f 1 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 9086e2de19ea0c6369187313d787c53f 1 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=9086e2de19ea0c6369187313d787c53f 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.MXR 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.MXR 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.MXR 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=554827412d081de118bd3d80a2265633231069c09f569e546643cb6a043ff60d 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # 
file=/tmp/spdk.key-sha512.PD3 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 554827412d081de118bd3d80a2265633231069c09f569e546643cb6a043ff60d 3 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 554827412d081de118bd3d80a2265633231069c09f569e546643cb6a043ff60d 3 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=554827412d081de118bd3d80a2265633231069c09f569e546643cb6a043ff60d 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.PD3 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.PD3 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.PD3 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 614617 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 614617 ']' 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:06.567 17:37:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.826 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:06.826 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:06.826 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 614773 /var/tmp/host.sock 00:13:06.826 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 614773 ']' 00:13:06.826 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:13:06.826 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:06.826 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:06.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
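Each of the four gen_dhchap_key traces above follows the same recipe: draw len/2 random bytes as a hex string, write it to a mktemp file in DHHC-1 form, and chmod it 0600. A condensed sketch, paraphrased from the trace — the real harness encodes the key through an inline python step that this sketch only approximates with raw hex:

    declare -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

    gen_dhchap_key_sketch() {
        local digest=$1 len=$2                           # e.g. "sha256" 32
        local key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters
        file=$(mktemp -t "spdk.key-$digest.XXX")
        # The harness's inline python base64-encodes the key (plus a CRC
        # suffix) into the DHHC-1 payload; this sketch keeps the raw hex
        # just to show the "DHHC-1:<digest index>:<key>:" framing.
        printf 'DHHC-1:%02d:%s:\n' "${digests[$digest]}" "$key" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }

The digest indices (null=0, sha256=1, sha384=2, sha512=3) match the digests array the trace declares, and line up with the DHHC-1:00:/DHHC-1:01:/DHHC-1:02:/DHHC-1:03: secrets seen in the nvme connect commands later in the log.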
00:13:06.826 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:06.826 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.084 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:07.084 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:07.084 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:13:07.084 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.084 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.084 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.084 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:07.084 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Nr9 00:13:07.084 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.084 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.084 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.084 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Nr9 00:13:07.085 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Nr9 00:13:07.343 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.Unh ]] 00:13:07.343 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Unh 00:13:07.343 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.343 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.343 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.343 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Unh 00:13:07.343 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Unh 00:13:07.602 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:07.602 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.4v9 00:13:07.602 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.602 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.602 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.602 
17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.4v9 00:13:07.602 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.4v9 00:13:07.602 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.Ba4 ]] 00:13:07.602 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Ba4 00:13:07.602 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.602 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.602 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.861 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Ba4 00:13:07.861 17:37:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Ba4 00:13:07.861 17:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:07.861 17:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.HM6 00:13:07.861 17:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.861 17:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.861 17:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.861 17:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.HM6 00:13:07.861 17:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.HM6 00:13:08.120 17:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.MXR ]] 00:13:08.120 17:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MXR 00:13:08.120 17:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.120 17:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.120 17:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.120 17:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MXR 00:13:08.120 17:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MXR 00:13:08.378 17:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:08.378 17:37:46 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.PD3 00:13:08.378 17:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.378 17:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.378 17:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.378 17:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.PD3 00:13:08.378 17:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.PD3 00:13:08.636 17:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:13:08.636 17:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:08.636 17:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:08.637 17:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:08.637 17:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:08.637 17:37:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:08.896 17:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:13:08.896 17:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:08.896 17:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:08.896 17:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:08.896 17:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:08.896 17:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.896 17:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:08.896 17:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.896 17:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.896 17:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.896 17:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:08.896 17:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:13:08.896 17:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:09.207 00:13:09.207 17:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:09.207 17:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:09.207 17:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.207 17:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.207 17:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.207 17:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.207 17:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.207 17:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.207 17:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:09.207 { 00:13:09.207 "cntlid": 1, 00:13:09.207 "qid": 0, 00:13:09.207 "state": "enabled", 00:13:09.207 "thread": "nvmf_tgt_poll_group_000", 00:13:09.207 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:13:09.207 "listen_address": { 00:13:09.207 "trtype": "RDMA", 00:13:09.207 "adrfam": "IPv4", 00:13:09.207 "traddr": "192.168.100.8", 00:13:09.207 "trsvcid": "4420" 00:13:09.207 }, 00:13:09.207 "peer_address": { 00:13:09.207 "trtype": "RDMA", 00:13:09.207 "adrfam": "IPv4", 00:13:09.207 "traddr": "192.168.100.8", 00:13:09.207 "trsvcid": "48910" 00:13:09.207 }, 00:13:09.207 "auth": { 00:13:09.207 "state": "completed", 00:13:09.207 "digest": "sha256", 00:13:09.207 "dhgroup": "null" 00:13:09.207 } 00:13:09.207 } 00:13:09.207 ]' 00:13:09.207 17:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:09.509 17:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:09.509 17:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:09.509 17:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:09.509 17:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:09.509 17:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.509 17:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.509 17:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.509 17:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect 
--dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=: 00:13:09.509 17:37:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=: 00:13:10.444 17:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.444 17:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:13:10.444 17:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.444 17:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.444 17:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.444 17:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:10.444 17:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:10.444 17:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:10.702 17:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:13:10.702 17:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:10.702 17:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:10.702 17:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:10.702 17:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:10.702 17:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.702 17:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.702 17:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.702 17:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.702 17:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.702 17:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
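The attach step repeats the same RPC sequence for every key index: load the key and (when present) the controller key into the host-side keyring, pin the digest/dhgroup combination under test, and attach over RDMA. Spelled out for key1, with every flag copied from the trace:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c

    $rpc -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.4v9
    $rpc -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Ba4
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1

The target side mirrors this before the attach: the same keys go into the target keyring over /var/tmp/spdk.sock, and nvmf_subsystem_add_host registers the host NQN with --dhchap-key key1 --dhchap-ctrlr-key ckey1 (target/auth.sh@70 in the trace).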
00:13:10.702 17:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.702 17:37:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:10.960 00:13:10.960 17:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:10.960 17:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.960 17:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:11.218 17:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.218 17:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.218 17:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.218 17:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.218 17:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.218 17:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:11.218 { 00:13:11.218 "cntlid": 3, 00:13:11.218 "qid": 0, 00:13:11.218 "state": "enabled", 00:13:11.218 "thread": "nvmf_tgt_poll_group_000", 00:13:11.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:13:11.218 "listen_address": { 00:13:11.218 "trtype": "RDMA", 00:13:11.218 "adrfam": "IPv4", 00:13:11.218 "traddr": "192.168.100.8", 00:13:11.218 "trsvcid": "4420" 00:13:11.218 }, 00:13:11.218 "peer_address": { 00:13:11.218 "trtype": "RDMA", 00:13:11.218 "adrfam": "IPv4", 00:13:11.218 "traddr": "192.168.100.8", 00:13:11.218 "trsvcid": "59343" 00:13:11.218 }, 00:13:11.218 "auth": { 00:13:11.218 "state": "completed", 00:13:11.218 "digest": "sha256", 00:13:11.218 "dhgroup": "null" 00:13:11.218 } 00:13:11.218 } 00:13:11.218 ]' 00:13:11.218 17:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:11.218 17:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:11.218 17:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:11.218 17:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:11.218 17:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:11.218 17:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.218 17:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.218 17:37:49 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.476 17:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==: 00:13:11.476 17:37:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==: 00:13:12.410 17:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.410 17:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:13:12.410 17:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.410 17:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.410 17:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.410 17:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:12.410 17:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:12.410 17:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:12.669 17:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:13:12.669 17:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:12.669 17:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:12.669 17:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:12.669 17:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:12.669 17:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.669 17:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:12.669 17:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.669 17:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.669 17:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:12.669 17:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:12.669 17:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:12.669 17:37:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:13:12.927
00:13:12.927 17:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:12.927 17:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:12.927 17:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:13.186 17:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:13.186 17:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:13.186 17:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:13.186 17:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:13.186 17:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:13.186 17:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:13.186 {
00:13:13.186 "cntlid": 5,
00:13:13.186 "qid": 0,
00:13:13.186 "state": "enabled",
00:13:13.186 "thread": "nvmf_tgt_poll_group_000",
00:13:13.186 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:13:13.186 "listen_address": {
00:13:13.186 "trtype": "RDMA",
00:13:13.186 "adrfam": "IPv4",
00:13:13.186 "traddr": "192.168.100.8",
00:13:13.186 "trsvcid": "4420"
00:13:13.186 },
00:13:13.186 "peer_address": {
00:13:13.186 "trtype": "RDMA",
00:13:13.186 "adrfam": "IPv4",
00:13:13.186 "traddr": "192.168.100.8",
00:13:13.186 "trsvcid": "45581"
00:13:13.186 },
00:13:13.186 "auth": {
00:13:13.186 "state": "completed",
00:13:13.186 "digest": "sha256",
00:13:13.186 "dhgroup": "null"
00:13:13.186 }
00:13:13.186 }
00:13:13.186 ]'
00:13:13.186 17:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:13.186 17:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:13:13.186 17:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:13.186 17:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:13:13.186 17:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:13.186 17:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:13.186 17:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:13.186 17:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:13.444 17:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z:
00:13:13.444 17:37:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z:
00:13:14.376 17:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:14.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:14.376 17:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:13:14.376 17:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:14.376 17:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:14.376 17:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:14.376 17:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:14.376 17:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:13:14.376 17:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:13:14.633 17:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3
00:13:14.633 17:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:14.633 17:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:13:14.633 17:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:13:14.633 17:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:13:14.633 17:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:14.633 17:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3
00:13:14.633 17:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:14.633 17:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:14.633 17:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:14.633 17:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:13:14.633 17:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:14.633 17:37:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:13:14.891
00:13:14.891 17:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:14.891 17:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:14.891 17:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:15.148 17:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:15.148 17:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:15.148 17:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:15.148 17:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:15.149 17:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:15.149 17:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:15.149 {
00:13:15.149 "cntlid": 7,
00:13:15.149 "qid": 0,
00:13:15.149 "state": "enabled",
00:13:15.149 "thread": "nvmf_tgt_poll_group_000",
00:13:15.149 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:13:15.149 "listen_address": {
00:13:15.149 "trtype": "RDMA",
00:13:15.149 "adrfam": "IPv4",
00:13:15.149 "traddr": "192.168.100.8",
00:13:15.149 "trsvcid": "4420"
00:13:15.149 },
00:13:15.149 "peer_address": {
00:13:15.149 "trtype": "RDMA",
00:13:15.149 "adrfam": "IPv4",
00:13:15.149 "traddr": "192.168.100.8",
00:13:15.149 "trsvcid": "35014"
00:13:15.149 },
00:13:15.149 "auth": {
00:13:15.149 "state": "completed",
00:13:15.149 "digest": "sha256",
00:13:15.149 "dhgroup": "null"
00:13:15.149 }
00:13:15.149 }
00:13:15.149 ]'
00:13:15.149 17:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:15.149 17:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:13:15.149 17:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:15.149 17:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:13:15.149 17:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:15.149 17:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:15.149 17:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:15.149 17:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:15.407 17:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=:
00:13:15.407 17:37:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=:
00:13:15.972 17:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:16.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:16.230 17:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:13:16.231 17:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:16.231 17:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:16.231 17:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:16.231 17:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:13:16.231 17:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:16.231 17:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:13:16.231 17:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:13:16.488 17:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0
00:13:16.488 17:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:16.488 17:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:13:16.488 17:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:13:16.488 17:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:13:16.488 17:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:16.488 17:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:16.488 17:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:16.489 17:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:16.489 17:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:16.489 17:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:16.489 17:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:16.489 17:37:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:13:16.746
00:13:16.746 17:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:16.746 17:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:16.746 17:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:17.005 17:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:17.005 17:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:17.005 17:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:17.005 17:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:17.005 17:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:17.005 17:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:17.005 {
00:13:17.005 "cntlid": 9,
00:13:17.005 "qid": 0,
00:13:17.005 "state": "enabled",
00:13:17.005 "thread": "nvmf_tgt_poll_group_000",
00:13:17.005 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:13:17.005 "listen_address": {
00:13:17.005 "trtype": "RDMA",
00:13:17.005 "adrfam": "IPv4",
00:13:17.005 "traddr": "192.168.100.8",
00:13:17.005 "trsvcid": "4420"
00:13:17.005 },
00:13:17.005 "peer_address": {
00:13:17.005 "trtype": "RDMA",
00:13:17.005 "adrfam": "IPv4",
00:13:17.005 "traddr": "192.168.100.8",
00:13:17.005 "trsvcid": "58057"
00:13:17.005 },
00:13:17.005 "auth": {
00:13:17.005 "state": "completed",
00:13:17.005 "digest": "sha256",
00:13:17.005 "dhgroup": "ffdhe2048"
00:13:17.005 }
00:13:17.005 }
00:13:17.005 ]'
00:13:17.005 17:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:17.005 17:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:13:17.005 17:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:17.005 17:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:13:17.005 17:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:17.005 17:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:17.005 17:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:17.005 17:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:17.263 17:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=:
00:13:17.263 17:37:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=:
00:13:17.829 17:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:13:18.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:13:18.087 17:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:13:18.087 17:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:18.087 17:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:18.087 17:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:18.087 17:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:13:18.087 17:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:13:18.087 17:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:13:18.345 17:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1
00:13:18.345 17:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:13:18.345 17:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:13:18.345 17:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:13:18.345 17:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:13:18.345 17:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:13:18.345 17:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:18.345 17:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:18.345 17:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:18.345 17:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:18.345 17:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:18.345 17:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:18.345 17:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:13:18.603
00:13:18.603 17:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:13:18.603 17:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:13:18.603 17:37:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:13:18.862 17:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:13:18.862 17:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:13:18.862 17:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:18.862 17:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:13:18.862 17:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:18.862 17:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:13:18.862 {
00:13:18.862 "cntlid": 11,
00:13:18.862 "qid": 0,
00:13:18.862 "state": "enabled",
00:13:18.862 "thread": "nvmf_tgt_poll_group_000",
00:13:18.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:13:18.862 "listen_address": {
00:13:18.862 "trtype": "RDMA",
00:13:18.862 "adrfam": "IPv4",
00:13:18.862 "traddr": "192.168.100.8",
00:13:18.862 "trsvcid": "4420"
00:13:18.862 },
00:13:18.862 "peer_address": {
00:13:18.862 "trtype": "RDMA",
00:13:18.862 "adrfam": "IPv4",
00:13:18.862 "traddr": "192.168.100.8",
00:13:18.862 "trsvcid": "56590"
00:13:18.862 },
00:13:18.862 "auth": {
00:13:18.862 "state": "completed",
00:13:18.862 "digest": "sha256",
00:13:18.862 "dhgroup": "ffdhe2048"
00:13:18.862 }
00:13:18.862 }
00:13:18.862 ]'
00:13:18.862 17:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:13:18.862 17:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:13:18.862 17:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:13:19.119 17:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:13:19.119 17:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:13:19.119 17:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:13:19.119 17:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:13:19.119 17:37:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:13:19.119 17:38:05? 
00:13:20.827 }, 00:13:20.827 "peer_address": { 00:13:20.827 "trtype": "RDMA", 00:13:20.827 "adrfam": "IPv4", 00:13:20.827 "traddr": "192.168.100.8", 00:13:20.827 "trsvcid": "50240" 00:13:20.827 }, 00:13:20.827 "auth": { 00:13:20.827 "state": "completed", 00:13:20.827 "digest": "sha256", 00:13:20.827 "dhgroup": "ffdhe2048" 00:13:20.827 } 00:13:20.827 } 00:13:20.827 ]' 00:13:20.827 17:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:20.827 17:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:20.827 17:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:20.827 17:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:20.827 17:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:20.827 17:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:20.827 17:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.827 17:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.085 17:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z: 00:13:21.085 17:37:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z: 00:13:21.652 17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:21.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:21.910 17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:13:21.910 17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.910 17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.910 17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.910 17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:21.910 17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:21.910 17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:22.168 
17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:13:22.168 17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:22.168 17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:22.168 17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:22.168 17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:22.168 17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.168 17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3 00:13:22.168 17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.168 17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.168 17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.168 17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:22.168 17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:22.168 17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:22.426 00:13:22.426 17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:22.426 17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:22.426 17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:22.685 17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:22.685 17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:22.685 17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.685 17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.685 17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.685 17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:22.685 { 00:13:22.685 "cntlid": 15, 00:13:22.685 "qid": 0, 00:13:22.685 "state": "enabled", 00:13:22.685 "thread": "nvmf_tgt_poll_group_000", 00:13:22.685 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:13:22.685 
"listen_address": { 00:13:22.685 "trtype": "RDMA", 00:13:22.685 "adrfam": "IPv4", 00:13:22.685 "traddr": "192.168.100.8", 00:13:22.685 "trsvcid": "4420" 00:13:22.685 }, 00:13:22.685 "peer_address": { 00:13:22.685 "trtype": "RDMA", 00:13:22.685 "adrfam": "IPv4", 00:13:22.685 "traddr": "192.168.100.8", 00:13:22.685 "trsvcid": "57311" 00:13:22.685 }, 00:13:22.685 "auth": { 00:13:22.685 "state": "completed", 00:13:22.685 "digest": "sha256", 00:13:22.685 "dhgroup": "ffdhe2048" 00:13:22.685 } 00:13:22.685 } 00:13:22.685 ]' 00:13:22.685 17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:22.685 17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:22.685 17:38:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:22.685 17:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:22.685 17:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:22.685 17:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:22.685 17:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:22.685 17:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.943 17:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=: 00:13:22.943 17:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=: 00:13:23.877 17:38:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:23.877 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:13:23.877 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.877 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.877 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.877 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:23.877 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:23.877 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:23.877 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:24.135 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:13:24.135 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:24.135 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:24.135 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:24.135 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:24.135 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.135 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:24.135 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.135 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.135 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.135 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:24.135 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:24.135 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:24.392 00:13:24.392 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:24.392 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:24.392 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.650 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:24.650 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:24.650 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.650 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.650 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.650 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:13:24.650 { 00:13:24.650 "cntlid": 17, 00:13:24.650 "qid": 0, 00:13:24.650 "state": "enabled", 00:13:24.650 "thread": "nvmf_tgt_poll_group_000", 00:13:24.650 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:13:24.650 "listen_address": { 00:13:24.650 "trtype": "RDMA", 00:13:24.650 "adrfam": "IPv4", 00:13:24.650 "traddr": "192.168.100.8", 00:13:24.650 "trsvcid": "4420" 00:13:24.650 }, 00:13:24.650 "peer_address": { 00:13:24.650 "trtype": "RDMA", 00:13:24.650 "adrfam": "IPv4", 00:13:24.650 "traddr": "192.168.100.8", 00:13:24.650 "trsvcid": "39093" 00:13:24.650 }, 00:13:24.650 "auth": { 00:13:24.650 "state": "completed", 00:13:24.650 "digest": "sha256", 00:13:24.650 "dhgroup": "ffdhe3072" 00:13:24.650 } 00:13:24.650 } 00:13:24.650 ]' 00:13:24.650 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:24.650 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:24.650 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:24.650 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:24.650 17:38:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:24.650 17:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:24.650 17:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:24.650 17:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:24.907 17:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=: 00:13:24.907 17:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=: 00:13:25.839 17:38:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.839 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:13:25.839 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.839 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.839 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.839 17:38:04 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:25.839 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:25.839 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:26.097 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:13:26.097 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:26.097 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:26.097 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:26.097 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:26.097 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.097 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.097 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.097 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.097 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.097 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.097 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.097 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.354 00:13:26.354 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:26.354 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:26.354 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.612 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.612 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.612 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.612 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.612 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.612 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:26.612 { 00:13:26.612 "cntlid": 19, 00:13:26.612 "qid": 0, 00:13:26.612 "state": "enabled", 00:13:26.612 "thread": "nvmf_tgt_poll_group_000", 00:13:26.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:13:26.612 "listen_address": { 00:13:26.612 "trtype": "RDMA", 00:13:26.612 "adrfam": "IPv4", 00:13:26.612 "traddr": "192.168.100.8", 00:13:26.612 "trsvcid": "4420" 00:13:26.612 }, 00:13:26.612 "peer_address": { 00:13:26.612 "trtype": "RDMA", 00:13:26.612 "adrfam": "IPv4", 00:13:26.612 "traddr": "192.168.100.8", 00:13:26.612 "trsvcid": "57881" 00:13:26.612 }, 00:13:26.612 "auth": { 00:13:26.612 "state": "completed", 00:13:26.612 "digest": "sha256", 00:13:26.612 "dhgroup": "ffdhe3072" 00:13:26.612 } 00:13:26.612 } 00:13:26.612 ]' 00:13:26.612 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:26.612 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:26.612 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:26.612 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:26.612 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:26.612 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.612 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.612 17:38:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.870 17:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==: 00:13:26.870 17:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==: 00:13:27.434 17:38:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.692 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:13:27.692 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:27.692 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.692 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.692 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:27.692 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:27.692 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:27.950 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:13:27.950 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:27.950 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:27.950 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:27.950 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:27.950 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.950 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:27.950 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.950 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.950 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.950 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:27.950 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:27.950 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:28.207 00:13:28.207 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:28.207 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:28.207 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.464 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- 
# [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.464 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:28.464 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.464 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.464 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.464 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:28.464 { 00:13:28.464 "cntlid": 21, 00:13:28.464 "qid": 0, 00:13:28.464 "state": "enabled", 00:13:28.464 "thread": "nvmf_tgt_poll_group_000", 00:13:28.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:13:28.464 "listen_address": { 00:13:28.464 "trtype": "RDMA", 00:13:28.464 "adrfam": "IPv4", 00:13:28.464 "traddr": "192.168.100.8", 00:13:28.464 "trsvcid": "4420" 00:13:28.464 }, 00:13:28.464 "peer_address": { 00:13:28.464 "trtype": "RDMA", 00:13:28.464 "adrfam": "IPv4", 00:13:28.464 "traddr": "192.168.100.8", 00:13:28.464 "trsvcid": "42888" 00:13:28.464 }, 00:13:28.464 "auth": { 00:13:28.464 "state": "completed", 00:13:28.464 "digest": "sha256", 00:13:28.464 "dhgroup": "ffdhe3072" 00:13:28.464 } 00:13:28.464 } 00:13:28.464 ]' 00:13:28.464 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:28.464 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:28.465 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:28.465 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:28.465 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:28.722 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.722 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.722 17:38:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.722 17:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z: 00:13:28.722 17:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z: 00:13:29.654 17:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:29.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:29.654 17:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:13:29.654 17:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.654 17:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.654 17:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.654 17:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:29.654 17:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:29.654 17:38:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:29.911 17:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:13:29.911 17:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:29.911 17:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:29.911 17:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:29.911 17:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:29.911 17:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.911 17:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3 00:13:29.911 17:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.911 17:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.911 17:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.911 17:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:29.911 17:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:29.911 17:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:30.168 00:13:30.168 17:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:30.169 17:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:30.169 17:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:30.426 17:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.426 17:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:30.426 17:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.426 17:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.426 17:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.426 17:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:30.426 { 00:13:30.426 "cntlid": 23, 00:13:30.426 "qid": 0, 00:13:30.426 "state": "enabled", 00:13:30.426 "thread": "nvmf_tgt_poll_group_000", 00:13:30.426 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:13:30.426 "listen_address": { 00:13:30.426 "trtype": "RDMA", 00:13:30.426 "adrfam": "IPv4", 00:13:30.426 "traddr": "192.168.100.8", 00:13:30.426 "trsvcid": "4420" 00:13:30.426 }, 00:13:30.426 "peer_address": { 00:13:30.426 "trtype": "RDMA", 00:13:30.426 "adrfam": "IPv4", 00:13:30.426 "traddr": "192.168.100.8", 00:13:30.426 "trsvcid": "44279" 00:13:30.426 }, 00:13:30.426 "auth": { 00:13:30.426 "state": "completed", 00:13:30.426 "digest": "sha256", 00:13:30.426 "dhgroup": "ffdhe3072" 00:13:30.426 } 00:13:30.426 } 00:13:30.426 ]' 00:13:30.426 17:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:30.426 17:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:30.426 17:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:30.426 17:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:30.426 17:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:30.426 17:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.426 17:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.426 17:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.683 17:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=: 00:13:30.684 17:38:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=: 00:13:31.248 17:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
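The round that ends here is the shape every following iteration repeats: configure the host's allowed digests and dhgroups, register the key with nvmf_subsystem_add_host, attach a controller so DH-HMAC-CHAP is negotiated, assert on the resulting qpair, then connect and disconnect with the kernel nvme CLI. The three jq probes reduce the nvmf_subsystem_get_qpairs dump to exactly the fields under test; a minimal standalone sketch of those assertions, assuming the dump were saved to a file qpairs.json (an illustrative name, not taken from this run):

    # Replays the checks from target/auth.sh@75-77 against a saved qpair dump.
    # qpairs.json stands in for the output of: rpc_cmd nvmf_subsystem_get_qpairs <nqn>
    digest=$(jq -r '.[0].auth.digest' qpairs.json)
    dhgroup=$(jq -r '.[0].auth.dhgroup' qpairs.json)
    state=$(jq -r '.[0].auth.state' qpairs.json)
    [[ $digest == sha256 && $dhgroup == ffdhe3072 && $state == completed ]] \
        || echo "unexpected auth parameters: $digest/$dhgroup/$state" >&2
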
00:13:31.506 17:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:13:31.506 17:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.506 17:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.506 17:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.506 17:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:31.506 17:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:31.506 17:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:31.506 17:38:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:31.763 17:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:13:31.763 17:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:31.763 17:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:31.763 17:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:31.763 17:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:31.763 17:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.763 17:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:31.763 17:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.763 17:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.763 17:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.763 17:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:31.763 17:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:31.763 17:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:32.021 00:13:32.021 17:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:32.021 17:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:32.021 17:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.278 17:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.278 17:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:32.278 17:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.278 17:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.278 17:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.278 17:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:32.278 { 00:13:32.278 "cntlid": 25, 00:13:32.278 "qid": 0, 00:13:32.278 "state": "enabled", 00:13:32.278 "thread": "nvmf_tgt_poll_group_000", 00:13:32.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:13:32.278 "listen_address": { 00:13:32.278 "trtype": "RDMA", 00:13:32.278 "adrfam": "IPv4", 00:13:32.278 "traddr": "192.168.100.8", 00:13:32.278 "trsvcid": "4420" 00:13:32.278 }, 00:13:32.278 "peer_address": { 00:13:32.278 "trtype": "RDMA", 00:13:32.278 "adrfam": "IPv4", 00:13:32.278 "traddr": "192.168.100.8", 00:13:32.278 "trsvcid": "45223" 00:13:32.278 }, 00:13:32.278 "auth": { 00:13:32.278 "state": "completed", 00:13:32.278 "digest": "sha256", 00:13:32.278 "dhgroup": "ffdhe4096" 00:13:32.278 } 00:13:32.278 } 00:13:32.278 ]' 00:13:32.278 17:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:32.278 17:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:32.278 17:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:32.278 17:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:32.278 17:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:32.278 17:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:32.278 17:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.278 17:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.535 17:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=: 00:13:32.535 17:38:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 
800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=: 00:13:33.465 17:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:33.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:33.465 17:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:13:33.465 17:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.465 17:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.465 17:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.465 17:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:33.465 17:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:33.465 17:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:33.722 17:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:13:33.722 17:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:33.722 17:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:33.722 17:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:33.722 17:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:33.722 17:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.722 17:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.722 17:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.722 17:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.722 17:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.722 17:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.722 17:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.722 17:38:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.980 00:13:33.980 17:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:33.980 17:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:33.980 17:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.237 17:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.237 17:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.237 17:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.237 17:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.237 17:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.237 17:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:34.237 { 00:13:34.237 "cntlid": 27, 00:13:34.237 "qid": 0, 00:13:34.237 "state": "enabled", 00:13:34.237 "thread": "nvmf_tgt_poll_group_000", 00:13:34.237 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:13:34.237 "listen_address": { 00:13:34.237 "trtype": "RDMA", 00:13:34.237 "adrfam": "IPv4", 00:13:34.237 "traddr": "192.168.100.8", 00:13:34.237 "trsvcid": "4420" 00:13:34.237 }, 00:13:34.237 "peer_address": { 00:13:34.237 "trtype": "RDMA", 00:13:34.237 "adrfam": "IPv4", 00:13:34.237 "traddr": "192.168.100.8", 00:13:34.237 "trsvcid": "47017" 00:13:34.237 }, 00:13:34.237 "auth": { 00:13:34.237 "state": "completed", 00:13:34.237 "digest": "sha256", 00:13:34.237 "dhgroup": "ffdhe4096" 00:13:34.237 } 00:13:34.237 } 00:13:34.237 ]' 00:13:34.237 17:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:34.237 17:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:34.237 17:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:34.237 17:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:34.237 17:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:34.237 17:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.237 17:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.237 17:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:34.494 17:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret 
DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==: 00:13:34.494 17:38:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==: 00:13:35.058 17:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:35.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:35.314 17:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:13:35.314 17:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.314 17:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.314 17:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.314 17:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:35.314 17:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:35.314 17:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:35.571 17:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:13:35.571 17:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:35.571 17:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:35.571 17:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:35.571 17:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:35.571 17:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.571 17:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.571 17:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.571 17:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.571 17:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.571 17:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.571 17:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.571 17:38:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:35.830 00:13:35.830 17:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:35.830 17:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:35.830 17:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.087 17:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.088 17:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:36.088 17:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.088 17:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.088 17:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.088 17:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:36.088 { 00:13:36.088 "cntlid": 29, 00:13:36.088 "qid": 0, 00:13:36.088 "state": "enabled", 00:13:36.088 "thread": "nvmf_tgt_poll_group_000", 00:13:36.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:13:36.088 "listen_address": { 00:13:36.088 "trtype": "RDMA", 00:13:36.088 "adrfam": "IPv4", 00:13:36.088 "traddr": "192.168.100.8", 00:13:36.088 "trsvcid": "4420" 00:13:36.088 }, 00:13:36.088 "peer_address": { 00:13:36.088 "trtype": "RDMA", 00:13:36.088 "adrfam": "IPv4", 00:13:36.088 "traddr": "192.168.100.8", 00:13:36.088 "trsvcid": "51788" 00:13:36.088 }, 00:13:36.088 "auth": { 00:13:36.088 "state": "completed", 00:13:36.088 "digest": "sha256", 00:13:36.088 "dhgroup": "ffdhe4096" 00:13:36.088 } 00:13:36.088 } 00:13:36.088 ]' 00:13:36.088 17:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:36.088 17:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:36.088 17:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:36.088 17:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:36.088 17:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:36.345 17:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:36.345 17:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:36.345 17:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.345 17:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z: 00:13:36.345 17:38:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z: 00:13:37.278 17:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:37.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:37.278 17:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:13:37.278 17:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.278 17:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.278 17:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.278 17:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:37.278 17:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:37.278 17:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:37.536 17:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:13:37.536 17:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:37.536 17:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:37.536 17:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:37.536 17:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:37.536 17:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.536 17:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3 00:13:37.536 17:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.536 17:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.536 17:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.536 17:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:13:37.536 17:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:37.536 17:38:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:37.793 00:13:37.793 17:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:37.793 17:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.793 17:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:38.051 17:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.051 17:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:38.051 17:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.051 17:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.051 17:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.051 17:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:38.051 { 00:13:38.051 "cntlid": 31, 00:13:38.051 "qid": 0, 00:13:38.051 "state": "enabled", 00:13:38.051 "thread": "nvmf_tgt_poll_group_000", 00:13:38.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:13:38.051 "listen_address": { 00:13:38.051 "trtype": "RDMA", 00:13:38.051 "adrfam": "IPv4", 00:13:38.051 "traddr": "192.168.100.8", 00:13:38.051 "trsvcid": "4420" 00:13:38.051 }, 00:13:38.051 "peer_address": { 00:13:38.051 "trtype": "RDMA", 00:13:38.051 "adrfam": "IPv4", 00:13:38.051 "traddr": "192.168.100.8", 00:13:38.051 "trsvcid": "32791" 00:13:38.051 }, 00:13:38.051 "auth": { 00:13:38.051 "state": "completed", 00:13:38.051 "digest": "sha256", 00:13:38.051 "dhgroup": "ffdhe4096" 00:13:38.051 } 00:13:38.051 } 00:13:38.051 ]' 00:13:38.051 17:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:38.051 17:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:38.051 17:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:38.051 17:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:38.051 17:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:38.051 17:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:38.051 17:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
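Every hostrpc trace line in this log is immediately followed by a target/auth.sh@31 line giving its expansion: the helper forwards its arguments to rpc.py with -s /var/tmp/host.sock, i.e. it drives the initiator-side SPDK application over its own RPC socket, while bare rpc_cmd calls go to the target application on the default socket. A sketch of the wrapper as implied by those paired lines (reconstructed from the trace, not copied from the script):

    # hostrpc as implied by the @31 expansions: prepend the host application's
    # RPC socket and pass all arguments through unchanged.
    hostrpc() {
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }
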
00:13:38.051 17:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.310 17:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=: 00:13:38.310 17:38:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=: 00:13:39.243 17:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:39.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:39.243 17:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:13:39.243 17:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.243 17:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.243 17:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.243 17:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:39.243 17:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:39.243 17:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:39.243 17:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:39.502 17:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:13:39.502 17:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:39.502 17:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:39.502 17:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:39.502 17:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:39.502 17:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.502 17:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.502 17:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.502 17:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.502 17:38:17 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.502 17:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.502 17:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.502 17:38:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.760 00:13:39.760 17:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:39.760 17:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.760 17:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:40.017 17:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:40.017 17:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:40.017 17:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.017 17:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.017 17:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.017 17:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:40.017 { 00:13:40.017 "cntlid": 33, 00:13:40.017 "qid": 0, 00:13:40.017 "state": "enabled", 00:13:40.017 "thread": "nvmf_tgt_poll_group_000", 00:13:40.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:13:40.017 "listen_address": { 00:13:40.017 "trtype": "RDMA", 00:13:40.017 "adrfam": "IPv4", 00:13:40.017 "traddr": "192.168.100.8", 00:13:40.017 "trsvcid": "4420" 00:13:40.018 }, 00:13:40.018 "peer_address": { 00:13:40.018 "trtype": "RDMA", 00:13:40.018 "adrfam": "IPv4", 00:13:40.018 "traddr": "192.168.100.8", 00:13:40.018 "trsvcid": "44070" 00:13:40.018 }, 00:13:40.018 "auth": { 00:13:40.018 "state": "completed", 00:13:40.018 "digest": "sha256", 00:13:40.018 "dhgroup": "ffdhe6144" 00:13:40.018 } 00:13:40.018 } 00:13:40.018 ]' 00:13:40.018 17:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:40.018 17:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:40.018 17:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:40.018 17:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:40.018 17:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:40.275 
17:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:40.275 17:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:40.275 17:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.275 17:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=: 00:13:40.275 17:38:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=: 00:13:41.209 17:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:41.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:41.209 17:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:13:41.209 17:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.209 17:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.209 17:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.209 17:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:41.209 17:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:41.209 17:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:41.467 17:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:13:41.467 17:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:41.467 17:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:41.467 17:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:41.467 17:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:41.467 17:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.467 17:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.467 17:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.467 17:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.467 17:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.467 17:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.467 17:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.467 17:38:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.725 00:13:41.725 17:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:41.725 17:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:41.725 17:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.983 17:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.983 17:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.983 17:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.983 17:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.983 17:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.983 17:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:41.983 { 00:13:41.983 "cntlid": 35, 00:13:41.983 "qid": 0, 00:13:41.983 "state": "enabled", 00:13:41.983 "thread": "nvmf_tgt_poll_group_000", 00:13:41.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:13:41.983 "listen_address": { 00:13:41.983 "trtype": "RDMA", 00:13:41.983 "adrfam": "IPv4", 00:13:41.983 "traddr": "192.168.100.8", 00:13:41.983 "trsvcid": "4420" 00:13:41.983 }, 00:13:41.983 "peer_address": { 00:13:41.983 "trtype": "RDMA", 00:13:41.983 "adrfam": "IPv4", 00:13:41.983 "traddr": "192.168.100.8", 00:13:41.983 "trsvcid": "50885" 00:13:41.983 }, 00:13:41.983 "auth": { 00:13:41.983 "state": "completed", 00:13:41.983 "digest": "sha256", 00:13:41.983 "dhgroup": "ffdhe6144" 00:13:41.983 } 00:13:41.983 } 00:13:41.983 ]' 00:13:41.983 17:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:41.983 17:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:41.983 
17:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:42.241 17:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:42.241 17:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:42.241 17:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:42.241 17:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:42.241 17:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.498 17:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==: 00:13:42.499 17:38:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==: 00:13:43.064 17:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:43.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:43.323 17:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:13:43.323 17:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.323 17:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.323 17:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.323 17:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:43.323 17:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:43.323 17:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:43.323 17:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:13:43.323 17:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:43.323 17:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:43.323 17:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:43.323 17:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:43.323 17:38:21 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:43.323 17:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.323 17:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.323 17:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.323 17:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.323 17:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.323 17:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.323 17:38:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.891 00:13:43.891 17:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:43.891 17:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:43.891 17:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.892 17:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.150 17:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:44.150 17:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.150 17:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.150 17:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.150 17:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:44.150 { 00:13:44.150 "cntlid": 37, 00:13:44.150 "qid": 0, 00:13:44.150 "state": "enabled", 00:13:44.150 "thread": "nvmf_tgt_poll_group_000", 00:13:44.150 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:13:44.150 "listen_address": { 00:13:44.150 "trtype": "RDMA", 00:13:44.150 "adrfam": "IPv4", 00:13:44.150 "traddr": "192.168.100.8", 00:13:44.150 "trsvcid": "4420" 00:13:44.150 }, 00:13:44.150 "peer_address": { 00:13:44.150 "trtype": "RDMA", 00:13:44.150 "adrfam": "IPv4", 00:13:44.150 "traddr": "192.168.100.8", 00:13:44.150 "trsvcid": "48574" 00:13:44.150 }, 00:13:44.150 "auth": { 00:13:44.150 "state": "completed", 00:13:44.150 "digest": "sha256", 00:13:44.150 "dhgroup": "ffdhe6144" 00:13:44.150 } 00:13:44.150 } 
00:13:44.150 ]' 00:13:44.150 17:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:44.150 17:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:44.150 17:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:44.150 17:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:44.150 17:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:44.150 17:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:44.150 17:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:44.150 17:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.408 17:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z: 00:13:44.408 17:38:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z: 00:13:44.974 17:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:45.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:45.232 17:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:13:45.232 17:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.232 17:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.232 17:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.232 17:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:45.232 17:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:45.232 17:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:45.489 17:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:13:45.489 17:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:45.489 17:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha256 00:13:45.489 17:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:45.489 17:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:45.489 17:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:45.489 17:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3 00:13:45.490 17:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.490 17:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.490 17:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.490 17:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:45.490 17:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:45.490 17:38:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:45.748 00:13:45.748 17:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:45.748 17:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.748 17:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:46.006 17:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.006 17:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:46.006 17:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.006 17:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.006 17:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.006 17:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:46.006 { 00:13:46.006 "cntlid": 39, 00:13:46.006 "qid": 0, 00:13:46.006 "state": "enabled", 00:13:46.006 "thread": "nvmf_tgt_poll_group_000", 00:13:46.006 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:13:46.006 "listen_address": { 00:13:46.006 "trtype": "RDMA", 00:13:46.006 "adrfam": "IPv4", 00:13:46.006 "traddr": "192.168.100.8", 00:13:46.006 "trsvcid": "4420" 00:13:46.006 }, 00:13:46.006 "peer_address": { 00:13:46.006 "trtype": "RDMA", 00:13:46.006 "adrfam": "IPv4", 00:13:46.006 "traddr": "192.168.100.8", 00:13:46.006 "trsvcid": "37307" 00:13:46.006 }, 
00:13:46.006 "auth": { 00:13:46.006 "state": "completed", 00:13:46.006 "digest": "sha256", 00:13:46.006 "dhgroup": "ffdhe6144" 00:13:46.006 } 00:13:46.006 } 00:13:46.006 ]' 00:13:46.006 17:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:46.006 17:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:46.006 17:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:46.006 17:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:46.006 17:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:46.006 17:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:46.006 17:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:46.006 17:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:46.264 17:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=: 00:13:46.264 17:38:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=: 00:13:47.200 17:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:47.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:47.200 17:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:13:47.200 17:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.200 17:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.200 17:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.200 17:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:47.200 17:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:47.200 17:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:47.200 17:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:47.459 17:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:13:47.459 17:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:47.459 17:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:47.459 17:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:47.459 17:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:47.459 17:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:47.459 17:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.459 17:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.459 17:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.459 17:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.459 17:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.459 17:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.459 17:38:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:48.030 00:13:48.030 17:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:48.030 17:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:48.030 17:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:48.030 17:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.030 17:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:48.030 17:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.030 17:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.030 17:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.030 17:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:48.030 { 00:13:48.030 "cntlid": 41, 00:13:48.030 "qid": 0, 00:13:48.030 "state": "enabled", 00:13:48.030 "thread": "nvmf_tgt_poll_group_000", 00:13:48.030 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:13:48.030 "listen_address": { 00:13:48.030 "trtype": "RDMA", 00:13:48.030 "adrfam": "IPv4", 00:13:48.030 "traddr": 
"192.168.100.8", 00:13:48.030 "trsvcid": "4420" 00:13:48.030 }, 00:13:48.030 "peer_address": { 00:13:48.030 "trtype": "RDMA", 00:13:48.030 "adrfam": "IPv4", 00:13:48.030 "traddr": "192.168.100.8", 00:13:48.030 "trsvcid": "53664" 00:13:48.030 }, 00:13:48.030 "auth": { 00:13:48.030 "state": "completed", 00:13:48.030 "digest": "sha256", 00:13:48.030 "dhgroup": "ffdhe8192" 00:13:48.030 } 00:13:48.030 } 00:13:48.030 ]' 00:13:48.030 17:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:48.435 17:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:48.435 17:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:48.435 17:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:48.435 17:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:48.435 17:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:48.435 17:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:48.435 17:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:48.435 17:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=: 00:13:48.435 17:38:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=: 00:13:49.006 17:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:49.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:49.266 17:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:13:49.266 17:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.266 17:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.266 17:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.266 17:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:49.266 17:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:49.266 17:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:49.524 17:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:13:49.524 17:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:49.524 17:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:49.524 17:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:49.524 17:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:49.524 17:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:49.524 17:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.524 17:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.525 17:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.525 17:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.525 17:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.525 17:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:49.525 17:38:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:50.091 00:13:50.091 17:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:50.091 17:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:50.091 17:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:50.349 17:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.349 17:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:50.349 17:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.349 17:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.349 17:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.349 17:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:13:50.349 { 00:13:50.349 "cntlid": 43, 00:13:50.349 "qid": 0, 00:13:50.349 "state": "enabled", 00:13:50.349 "thread": "nvmf_tgt_poll_group_000", 00:13:50.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:13:50.349 "listen_address": { 00:13:50.349 "trtype": "RDMA", 00:13:50.349 "adrfam": "IPv4", 00:13:50.349 "traddr": "192.168.100.8", 00:13:50.349 "trsvcid": "4420" 00:13:50.349 }, 00:13:50.349 "peer_address": { 00:13:50.349 "trtype": "RDMA", 00:13:50.349 "adrfam": "IPv4", 00:13:50.349 "traddr": "192.168.100.8", 00:13:50.349 "trsvcid": "53756" 00:13:50.349 }, 00:13:50.349 "auth": { 00:13:50.349 "state": "completed", 00:13:50.349 "digest": "sha256", 00:13:50.349 "dhgroup": "ffdhe8192" 00:13:50.349 } 00:13:50.349 } 00:13:50.349 ]' 00:13:50.349 17:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:50.349 17:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:50.349 17:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:50.349 17:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:50.349 17:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:50.350 17:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:50.350 17:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:50.350 17:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.608 17:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==: 00:13:50.608 17:38:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==: 00:13:51.174 17:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:51.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:51.432 17:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:13:51.432 17:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.432 17:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.432 17:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.432 17:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:51.432 17:38:29 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:51.432 17:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:51.689 17:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:13:51.689 17:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:51.689 17:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:51.689 17:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:51.689 17:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:51.689 17:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:51.689 17:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.689 17:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.689 17:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.689 17:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.689 17:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.689 17:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:51.689 17:38:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:52.253 00:13:52.253 17:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:52.253 17:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:52.253 17:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:52.253 17:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:52.253 17:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:52.253 17:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.253 17:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:52.510 17:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.510 17:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:52.510 { 00:13:52.510 "cntlid": 45, 00:13:52.510 "qid": 0, 00:13:52.510 "state": "enabled", 00:13:52.510 "thread": "nvmf_tgt_poll_group_000", 00:13:52.510 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:13:52.510 "listen_address": { 00:13:52.510 "trtype": "RDMA", 00:13:52.510 "adrfam": "IPv4", 00:13:52.510 "traddr": "192.168.100.8", 00:13:52.510 "trsvcid": "4420" 00:13:52.510 }, 00:13:52.510 "peer_address": { 00:13:52.510 "trtype": "RDMA", 00:13:52.510 "adrfam": "IPv4", 00:13:52.510 "traddr": "192.168.100.8", 00:13:52.510 "trsvcid": "49630" 00:13:52.510 }, 00:13:52.510 "auth": { 00:13:52.510 "state": "completed", 00:13:52.510 "digest": "sha256", 00:13:52.510 "dhgroup": "ffdhe8192" 00:13:52.510 } 00:13:52.510 } 00:13:52.510 ]' 00:13:52.510 17:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:52.510 17:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:52.510 17:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:52.510 17:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:52.510 17:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:52.510 17:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:52.510 17:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:52.510 17:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:52.767 17:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z: 00:13:52.767 17:38:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z: 00:13:53.332 17:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:53.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:53.589 17:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:13:53.589 17:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.589 17:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:13:53.589 17:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.589 17:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:53.590 17:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:53.590 17:38:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:53.846 17:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:13:53.846 17:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:53.846 17:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:53.846 17:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:53.846 17:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:53.846 17:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:53.846 17:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3 00:13:53.846 17:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.847 17:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.847 17:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.847 17:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:53.847 17:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:53.847 17:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:54.411 00:13:54.411 17:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:54.411 17:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.411 17:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:54.411 17:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.411 17:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:54.411 17:38:32 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.411 17:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.411 17:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.411 17:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:54.411 { 00:13:54.411 "cntlid": 47, 00:13:54.411 "qid": 0, 00:13:54.411 "state": "enabled", 00:13:54.411 "thread": "nvmf_tgt_poll_group_000", 00:13:54.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:13:54.411 "listen_address": { 00:13:54.411 "trtype": "RDMA", 00:13:54.411 "adrfam": "IPv4", 00:13:54.411 "traddr": "192.168.100.8", 00:13:54.411 "trsvcid": "4420" 00:13:54.411 }, 00:13:54.411 "peer_address": { 00:13:54.411 "trtype": "RDMA", 00:13:54.411 "adrfam": "IPv4", 00:13:54.411 "traddr": "192.168.100.8", 00:13:54.411 "trsvcid": "45139" 00:13:54.411 }, 00:13:54.411 "auth": { 00:13:54.411 "state": "completed", 00:13:54.411 "digest": "sha256", 00:13:54.411 "dhgroup": "ffdhe8192" 00:13:54.411 } 00:13:54.411 } 00:13:54.411 ]' 00:13:54.411 17:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:54.668 17:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:54.668 17:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:54.668 17:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:54.668 17:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:54.668 17:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:54.668 17:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:54.668 17:38:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:54.925 17:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=: 00:13:54.925 17:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=: 00:13:55.489 17:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:55.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:55.746 17:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:13:55.746 17:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.746 17:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:55.746 17:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.746 17:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:55.746 17:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:55.746 17:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:55.746 17:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:55.746 17:38:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:56.003 17:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:13:56.003 17:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:56.003 17:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:56.003 17:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:56.003 17:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:56.003 17:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.003 17:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.003 17:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.003 17:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.003 17:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.003 17:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.003 17:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.003 17:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.260 00:13:56.260 17:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:56.260 17:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.260 17:38:34 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:56.260 17:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.260 17:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:56.260 17:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.260 17:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.518 17:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.518 17:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:56.518 { 00:13:56.518 "cntlid": 49, 00:13:56.518 "qid": 0, 00:13:56.518 "state": "enabled", 00:13:56.518 "thread": "nvmf_tgt_poll_group_000", 00:13:56.518 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:13:56.518 "listen_address": { 00:13:56.518 "trtype": "RDMA", 00:13:56.518 "adrfam": "IPv4", 00:13:56.518 "traddr": "192.168.100.8", 00:13:56.518 "trsvcid": "4420" 00:13:56.518 }, 00:13:56.518 "peer_address": { 00:13:56.518 "trtype": "RDMA", 00:13:56.518 "adrfam": "IPv4", 00:13:56.518 "traddr": "192.168.100.8", 00:13:56.518 "trsvcid": "43935" 00:13:56.518 }, 00:13:56.518 "auth": { 00:13:56.518 "state": "completed", 00:13:56.518 "digest": "sha384", 00:13:56.518 "dhgroup": "null" 00:13:56.518 } 00:13:56.518 } 00:13:56.518 ]' 00:13:56.518 17:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:56.518 17:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:56.518 17:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:56.518 17:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:56.518 17:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:56.518 17:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:56.518 17:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:56.518 17:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:56.775 17:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=: 00:13:56.776 17:38:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=: 00:13:57.341 17:38:35 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:57.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:57.598 17:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:13:57.598 17:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.598 17:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.598 17:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.598 17:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:57.598 17:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:57.598 17:38:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:57.855 17:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:13:57.855 17:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:57.855 17:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:57.855 17:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:57.855 17:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:57.855 17:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:57.855 17:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.855 17:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.855 17:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.855 17:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.855 17:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.855 17:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:57.855 17:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:58.113 00:13:58.113 17:38:36 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:58.113 17:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:58.113 17:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:58.370 17:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.370 17:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:58.370 17:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.370 17:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.370 17:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.370 17:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:58.370 { 00:13:58.370 "cntlid": 51, 00:13:58.370 "qid": 0, 00:13:58.370 "state": "enabled", 00:13:58.370 "thread": "nvmf_tgt_poll_group_000", 00:13:58.370 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:13:58.370 "listen_address": { 00:13:58.370 "trtype": "RDMA", 00:13:58.370 "adrfam": "IPv4", 00:13:58.370 "traddr": "192.168.100.8", 00:13:58.370 "trsvcid": "4420" 00:13:58.370 }, 00:13:58.370 "peer_address": { 00:13:58.370 "trtype": "RDMA", 00:13:58.370 "adrfam": "IPv4", 00:13:58.370 "traddr": "192.168.100.8", 00:13:58.370 "trsvcid": "54836" 00:13:58.370 }, 00:13:58.370 "auth": { 00:13:58.370 "state": "completed", 00:13:58.370 "digest": "sha384", 00:13:58.370 "dhgroup": "null" 00:13:58.370 } 00:13:58.370 } 00:13:58.370 ]' 00:13:58.370 17:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:58.370 17:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:13:58.370 17:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:58.370 17:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:58.370 17:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:58.370 17:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:58.370 17:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:58.370 17:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:58.626 17:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==: 00:13:58.626 17:38:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 
800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==: 00:13:59.190 17:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:59.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:59.447 17:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:13:59.447 17:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.447 17:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.447 17:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.447 17:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:59.447 17:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:59.447 17:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:13:59.704 17:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:13:59.704 17:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:59.704 17:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:13:59.704 17:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:59.704 17:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:59.704 17:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.704 17:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.704 17:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.705 17:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.705 17:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.705 17:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.705 17:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.705 17:38:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:59.962 00:13:59.962 17:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:59.962 17:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:59.962 17:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:00.219 17:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.219 17:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:00.219 17:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.219 17:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.219 17:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.219 17:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:00.219 { 00:14:00.219 "cntlid": 53, 00:14:00.219 "qid": 0, 00:14:00.219 "state": "enabled", 00:14:00.219 "thread": "nvmf_tgt_poll_group_000", 00:14:00.219 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:14:00.219 "listen_address": { 00:14:00.219 "trtype": "RDMA", 00:14:00.219 "adrfam": "IPv4", 00:14:00.219 "traddr": "192.168.100.8", 00:14:00.219 "trsvcid": "4420" 00:14:00.219 }, 00:14:00.219 "peer_address": { 00:14:00.219 "trtype": "RDMA", 00:14:00.219 "adrfam": "IPv4", 00:14:00.219 "traddr": "192.168.100.8", 00:14:00.219 "trsvcid": "37820" 00:14:00.219 }, 00:14:00.219 "auth": { 00:14:00.219 "state": "completed", 00:14:00.219 "digest": "sha384", 00:14:00.219 "dhgroup": "null" 00:14:00.219 } 00:14:00.219 } 00:14:00.219 ]' 00:14:00.219 17:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:00.219 17:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:00.219 17:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:00.219 17:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:00.219 17:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:00.219 17:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:00.219 17:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.219 17:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.476 17:38:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z: 00:14:00.476 17:38:38 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z:
00:14:01.408 17:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:01.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:01.408 17:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:14:01.408 17:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:01.408 17:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:01.408 17:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:01.408 17:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:01.408 17:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:14:01.408 17:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:14:01.666 17:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3
00:14:01.666 17:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:01.666 17:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:14:01.666 17:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:14:01.666 17:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:14:01.666 17:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:01.666 17:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3
00:14:01.666 17:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:01.666 17:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:01.666 17:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:01.666 17:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:14:01.666 17:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:01.666 17:38:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:01.924
00:14:01.924 17:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:01.924 17:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:01.924 17:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:02.181 17:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:02.181 17:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:02.181 17:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:02.181 17:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:02.181 17:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:02.181 17:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:02.181 {
00:14:02.181 "cntlid": 55,
00:14:02.181 "qid": 0,
00:14:02.181 "state": "enabled",
00:14:02.181 "thread": "nvmf_tgt_poll_group_000",
00:14:02.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:14:02.181 "listen_address": {
00:14:02.181 "trtype": "RDMA",
00:14:02.181 "adrfam": "IPv4",
00:14:02.181 "traddr": "192.168.100.8",
00:14:02.181 "trsvcid": "4420"
00:14:02.181 },
00:14:02.181 "peer_address": {
00:14:02.181 "trtype": "RDMA",
00:14:02.181 "adrfam": "IPv4",
00:14:02.181 "traddr": "192.168.100.8",
00:14:02.181 "trsvcid": "55413"
00:14:02.181 },
00:14:02.181 "auth": {
00:14:02.181 "state": "completed",
00:14:02.181 "digest": "sha384",
00:14:02.181 "dhgroup": "null"
00:14:02.181 }
00:14:02.181 }
00:14:02.181 ]'
00:14:02.181 17:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:02.181 17:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:02.181 17:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:02.181 17:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:14:02.181 17:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:02.181 17:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:02.181 17:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:02.181 17:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:02.439 17:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=:
00:14:02.439 17:38:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=:
00:14:03.003 17:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:03.260 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:03.260 17:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:14:03.260 17:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:03.260 17:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:03.260 17:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
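Note for readers following the trace: the entries above complete one full pass of target/auth.sh (digest sha384, DH group null), and every pass below repeats the same round with a different group and key index. A minimal sketch of that round in shell, reusing the rpc.py path, sockets, NQNs and flags exactly as they appear in the surrounding entries (connect_authenticate and hostrpc are the test's own helpers, paraphrased here; the variable names are illustrative only, and rpc_cmd's target-side socket is not shown in the log):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
    digest=sha384 dhgroup=ffdhe2048 key=key0

    # Host side: restrict the initiator to a single digest/DH-group combination.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # Target side: allow the host NQN, bound to the DH-CHAP key under test
    # (the key0-key2 rounds below also pass --dhchap-ctrlr-key; the key3 rounds do not).
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key"
    # Host side: attaching a controller is what triggers the DH-HMAC-CHAP handshake.
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 \
        -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key "$key"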
00:14:03.260 17:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:14:03.260 17:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:03.260 17:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:14:03.260 17:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:14:03.518 17:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0
00:14:03.518 17:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:03.518 17:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:14:03.518 17:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:14:03.518 17:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:14:03.518 17:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:03.518 17:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:03.518 17:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:03.518 17:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:03.518 17:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:03.518 17:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:03.518 17:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:03.518 17:38:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:03.777
00:14:03.777 17:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:03.777 17:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:03.777 17:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:04.035 17:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:04.035 17:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:04.035 17:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:04.035 17:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:04.035 17:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:04.035 17:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:04.035 {
00:14:04.035 "cntlid": 57,
00:14:04.035 "qid": 0,
00:14:04.035 "state": "enabled",
00:14:04.035 "thread": "nvmf_tgt_poll_group_000",
00:14:04.035 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:14:04.035 "listen_address": {
00:14:04.035 "trtype": "RDMA",
00:14:04.035 "adrfam": "IPv4",
00:14:04.035 "traddr": "192.168.100.8",
00:14:04.035 "trsvcid": "4420"
00:14:04.035 },
00:14:04.035 "peer_address": {
00:14:04.035 "trtype": "RDMA",
00:14:04.035 "adrfam": "IPv4",
00:14:04.035 "traddr": "192.168.100.8",
00:14:04.035 "trsvcid": "41343"
00:14:04.035 },
00:14:04.035 "auth": {
00:14:04.035 "state": "completed",
00:14:04.035 "digest": "sha384",
00:14:04.035 "dhgroup": "ffdhe2048"
00:14:04.035 }
00:14:04.035 }
00:14:04.035 ]'
00:14:04.035 17:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:04.035 17:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:04.035 17:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:04.035 17:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:14:04.035 17:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:04.035 17:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
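The three jq probes just above are the round's entire pass/fail criterion: the target must report the qpair's negotiated digest and DH group, and an auth state of "completed". Condensed into a sketch, reusing the variables from the sketch earlier (the qpairs JSON is exactly the shape dumped in the entries above):

    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]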
00:14:04.035 17:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:04.035 17:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:04.293 17:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=:
00:14:04.293 17:38:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=:
00:14:05.228 17:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:05.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:05.228 17:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:14:05.228 17:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.228 17:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:05.228 17:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.228 17:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:05.228 17:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:14:05.228 17:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:14:05.487 17:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1
00:14:05.487 17:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:05.487 17:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:14:05.487 17:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:14:05.487 17:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:14:05.487 17:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:05.487 17:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:05.487 17:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:05.487 17:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:05.487 17:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:05.487 17:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:05.487 17:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:05.487 17:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:05.745
00:14:05.745 17:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:05.745 17:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:05.745 17:38:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:06.003 17:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:06.004 17:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:06.004 17:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:06.004 17:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:06.004 17:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:06.004 17:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:06.004 {
00:14:06.004 "cntlid": 59,
00:14:06.004 "qid": 0,
00:14:06.004 "state": "enabled",
00:14:06.004 "thread": "nvmf_tgt_poll_group_000",
00:14:06.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:14:06.004 "listen_address": {
00:14:06.004 "trtype": "RDMA",
00:14:06.004 "adrfam": "IPv4",
00:14:06.004 "traddr": "192.168.100.8",
00:14:06.004 "trsvcid": "4420"
00:14:06.004 },
00:14:06.004 "peer_address": {
00:14:06.004 "trtype": "RDMA",
00:14:06.004 "adrfam": "IPv4",
00:14:06.004 "traddr": "192.168.100.8",
00:14:06.004 "trsvcid": "56242"
00:14:06.004 },
00:14:06.004 "auth": {
00:14:06.004 "state": "completed",
00:14:06.004 "digest": "sha384",
00:14:06.004 "dhgroup": "ffdhe2048"
00:14:06.004 }
00:14:06.004 }
00:14:06.004 ]'
00:14:06.004 17:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:06.004 17:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:06.004 17:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:06.004 17:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:14:06.004 17:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:06.004 17:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:06.004 17:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:06.004 17:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:06.261 17:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==:
00:14:06.261 17:38:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==:
00:14:06.826 17:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:07.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:07.084 17:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:14:07.084 17:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.084 17:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:07.084 17:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.084 17:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:07.084 17:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:14:07.084 17:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:14:07.342 17:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2
00:14:07.342 17:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:07.342 17:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:14:07.342 17:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:14:07.342 17:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:14:07.342 17:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:07.342 17:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:07.342 17:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.342 17:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:07.342 17:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.342 17:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:07.342 17:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:07.342 17:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:07.599
00:14:07.599 17:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:07.599 17:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:07.599 17:38:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:07.856 17:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:07.856 17:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:07.856 17:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:07.856 17:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:07.856 17:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:07.856 17:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:07.856 {
00:14:07.856 "cntlid": 61,
00:14:07.856 "qid": 0,
00:14:07.856 "state": "enabled",
00:14:07.856 "thread": "nvmf_tgt_poll_group_000",
00:14:07.856 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:14:07.856 "listen_address": {
00:14:07.856 "trtype": "RDMA",
00:14:07.856 "adrfam": "IPv4",
00:14:07.856 "traddr": "192.168.100.8",
00:14:07.856 "trsvcid": "4420"
00:14:07.856 },
00:14:07.856 "peer_address": {
00:14:07.856 "trtype": "RDMA",
00:14:07.856 "adrfam": "IPv4",
00:14:07.856 "traddr": "192.168.100.8",
00:14:07.856 "trsvcid": "40451"
00:14:07.856 },
00:14:07.856 "auth": {
00:14:07.856 "state": "completed",
00:14:07.856 "digest": "sha384",
00:14:07.856 "dhgroup": "ffdhe2048"
00:14:07.856 }
00:14:07.856 }
00:14:07.856 ]'
00:14:07.856 17:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:07.856 17:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:07.856 17:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:07.856 17:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:14:07.856 17:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:08.113 17:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:08.113 17:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:08.113 17:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:08.113 17:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z:
00:14:08.113 17:38:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z:
00:14:09.041 17:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:09.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:09.041 17:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:14:09.041 17:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:09.041 17:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:09.041 17:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
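Each round also exercises the kernel initiator, not just the SPDK host stack: the nvme_connect/nvme disconnect pair seen above reduces to the nvme-cli calls below, with the DHHC-1 secrets exactly as printed in the log (shortened to "..." here, since only the log's own strings are valid). Supplying --dhchap-ctrl-secret alongside --dhchap-secret requests bidirectional authentication, which is why the key3 rounds, which have no controller key, pass only --dhchap-secret:

    nvme connect -t rdma -a 192.168.100.8 -n "$subnqn" -i 1 \
        -q "$hostnqn" --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 \
        --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
    nvme disconnect -n "$subnqn"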
00:14:09.041 17:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:09.041 17:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:14:09.041 17:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:14:09.299 17:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3
00:14:09.299 17:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:09.299 17:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:14:09.299 17:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:14:09.299 17:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:14:09.299 17:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:09.299 17:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3
00:14:09.299 17:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:09.299 17:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:09.299 17:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:09.299 17:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:14:09.299 17:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:09.299 17:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:09.556
00:14:09.556 17:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:09.556 17:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:09.556 17:38:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:09.813 17:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:09.813 17:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:09.813 17:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:09.813 17:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:09.813 17:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:09.813 17:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:09.813 {
00:14:09.813 "cntlid": 63,
00:14:09.813 "qid": 0,
00:14:09.813 "state": "enabled",
00:14:09.813 "thread": "nvmf_tgt_poll_group_000",
00:14:09.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:14:09.813 "listen_address": {
00:14:09.813 "trtype": "RDMA",
00:14:09.813 "adrfam": "IPv4",
00:14:09.813 "traddr": "192.168.100.8",
00:14:09.813 "trsvcid": "4420"
00:14:09.813 },
00:14:09.813 "peer_address": {
00:14:09.813 "trtype": "RDMA",
00:14:09.813 "adrfam": "IPv4",
00:14:09.813 "traddr": "192.168.100.8",
00:14:09.813 "trsvcid": "60555"
00:14:09.813 },
00:14:09.813 "auth": {
00:14:09.813 "state": "completed",
00:14:09.813 "digest": "sha384",
00:14:09.813 "dhgroup": "ffdhe2048"
00:14:09.813 }
00:14:09.813 }
00:14:09.813 ]'
00:14:09.813 17:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:09.814 17:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:09.814 17:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:09.814 17:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:14:09.814 17:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:10.071 17:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:10.071 17:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:10.071 17:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:10.071 17:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=:
00:14:10.071 17:38:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=:
00:14:11.004 17:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:11.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:11.004 17:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:14:11.004 17:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:11.004 17:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:11.004 17:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:11.004 17:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:14:11.004 17:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:11.004 17:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:14:11.004 17:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:14:11.262 17:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0
00:14:11.262 17:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:11.262 17:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:14:11.262 17:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:14:11.262 17:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:14:11.262 17:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:11.262 17:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:11.262 17:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:11.262 17:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:11.262 17:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:11.262 17:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:11.262 17:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:11.262 17:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:11.520
00:14:11.521 17:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:11.521 17:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:11.521 17:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:11.779 17:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:11.779 17:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:11.779 17:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:11.779 17:38:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:11.779 17:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:11.779 17:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:11.779 {
00:14:11.779 "cntlid": 65,
00:14:11.779 "qid": 0,
00:14:11.779 "state": "enabled",
00:14:11.779 "thread": "nvmf_tgt_poll_group_000",
00:14:11.779 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:14:11.779 "listen_address": {
00:14:11.779 "trtype": "RDMA",
00:14:11.779 "adrfam": "IPv4",
00:14:11.779 "traddr": "192.168.100.8",
00:14:11.779 "trsvcid": "4420"
00:14:11.779 },
00:14:11.779 "peer_address": {
00:14:11.779 "trtype": "RDMA",
00:14:11.779 "adrfam": "IPv4",
00:14:11.779 "traddr": "192.168.100.8",
00:14:11.779 "trsvcid": "33805"
00:14:11.779 },
00:14:11.779 "auth": {
00:14:11.779 "state": "completed",
00:14:11.779 "digest": "sha384",
00:14:11.779 "dhgroup": "ffdhe3072"
00:14:11.779 }
00:14:11.779 }
00:14:11.779 ]'
00:14:11.779 17:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:11.779 17:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:11.779 17:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:11.779 17:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:14:11.779 17:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:11.779 17:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:11.779 17:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:11.779 17:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:12.038 17:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=:
00:14:12.038 17:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=:
00:14:12.603 17:38:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:12.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:12.861 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:14:12.861 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:12.861 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:12.861 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:12.861 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:12.861 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:14:12.861 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:14:13.119 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1
00:14:13.119 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:13.119 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:14:13.119 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:14:13.119 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:14:13.119 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:13.119 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:13.119 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:13.119 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:13.119 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:13.119 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:13.119 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:13.119 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:13.377
00:14:13.377 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:13.377 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:13.377 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:13.634 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:13.634 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:13.634 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:13.634 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:13.634 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:13.634 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:13.634 {
00:14:13.634 "cntlid": 67,
00:14:13.634 "qid": 0,
00:14:13.634 "state": "enabled",
00:14:13.634 "thread": "nvmf_tgt_poll_group_000",
00:14:13.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:14:13.634 "listen_address": {
00:14:13.634 "trtype": "RDMA",
00:14:13.634 "adrfam": "IPv4",
00:14:13.634 "traddr": "192.168.100.8",
00:14:13.634 "trsvcid": "4420"
00:14:13.634 },
00:14:13.634 "peer_address": {
00:14:13.634 "trtype": "RDMA",
00:14:13.634 "adrfam": "IPv4",
00:14:13.634 "traddr": "192.168.100.8",
00:14:13.634 "trsvcid": "38991"
00:14:13.634 },
00:14:13.634 "auth": {
00:14:13.634 "state": "completed",
00:14:13.634 "digest": "sha384",
00:14:13.634 "dhgroup": "ffdhe3072"
00:14:13.634 }
00:14:13.634 }
00:14:13.634 ]'
00:14:13.634 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:13.634 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:13.634 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:13.634 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:14:13.634 17:38:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:13.891 17:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:13.891 17:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:13.892 17:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:13.892 17:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==:
00:14:13.892 17:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==:
00:14:14.822 17:38:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:14.822 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:14.822 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:14:14.822 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:14.822 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:14.822 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:14.822 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:14.822 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:14:15.079 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2
00:14:15.079 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:15.079 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:14:15.079 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:14:15.079 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:14:15.079 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:15.079 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:15.079 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:15.079 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:15.079 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:15.079 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:15.079 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:15.079 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:15.336
00:14:15.336 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:15.336 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:15.336 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:15.593 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:15.593 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:15.593 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:15.593 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:15.593 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:15.593 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:15.593 {
00:14:15.593 "cntlid": 69,
00:14:15.593 "qid": 0,
00:14:15.593 "state": "enabled",
00:14:15.593 "thread": "nvmf_tgt_poll_group_000",
00:14:15.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:14:15.593 "listen_address": {
00:14:15.593 "trtype": "RDMA",
00:14:15.593 "adrfam": "IPv4",
00:14:15.593 "traddr": "192.168.100.8",
00:14:15.593 "trsvcid": "4420"
00:14:15.593 },
00:14:15.593 "peer_address": {
00:14:15.593 "trtype": "RDMA",
00:14:15.593 "adrfam": "IPv4",
00:14:15.593 "traddr": "192.168.100.8",
00:14:15.593 "trsvcid": "34038"
00:14:15.593 },
00:14:15.593 "auth": {
00:14:15.593 "state": "completed",
00:14:15.593 "digest": "sha384",
00:14:15.593 "dhgroup": "ffdhe3072"
00:14:15.593 }
00:14:15.593 }
00:14:15.593 ]'
00:14:15.593 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:15.593 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:15.593 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:15.593 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:14:15.593 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:15.593 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:15.593 17:38:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:15.850 17:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:15.850 17:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z:
00:14:15.850 17:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z:
00:14:16.413 17:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:16.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:16.669 17:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:14:16.669 17:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.670 17:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:16.670 17:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
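Seen from a distance, this whole stretch of the trace is a single digest (sha384) crossed with a list of DH groups and key indexes; schematically, and using only the combinations actually visible in this section of the log:

    for dhgroup in null ffdhe2048 ffdhe3072 ffdhe4096; do   # groups exercised here
        for key in 0 1 2 3; do                              # keys[0..3] from the test setup
            connect_authenticate sha384 "$dhgroup" "$key"   # the round sketched earlier
        done
    done

The remaining entries below are the ffdhe3072/key3 round and the start of the ffdhe4096 pass of exactly this loop.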
00:14:16.670 17:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:16.670 17:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:14:16.670 17:38:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:14:16.927 17:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3
00:14:16.927 17:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:16.927 17:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:14:16.927 17:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:14:16.927 17:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:14:16.927 17:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:16.927 17:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3
00:14:16.927 17:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.927 17:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:16.927 17:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:16.927 17:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:14:16.927 17:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:16.927 17:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:17.184
00:14:17.184 17:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:17.184 17:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:17.184 17:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:17.441 17:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:17.441 17:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:17.441 17:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:17.441 17:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:17.441 17:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:17.441 17:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:17.441 {
00:14:17.441 "cntlid": 71,
00:14:17.441 "qid": 0,
00:14:17.441 "state": "enabled",
00:14:17.441 "thread": "nvmf_tgt_poll_group_000",
00:14:17.441 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:14:17.441 "listen_address": {
00:14:17.441 "trtype": "RDMA",
00:14:17.441 "adrfam": "IPv4",
00:14:17.441 "traddr": "192.168.100.8",
00:14:17.441 "trsvcid": "4420"
00:14:17.441 },
00:14:17.441 "peer_address": {
00:14:17.441 "trtype": "RDMA",
00:14:17.441 "adrfam": "IPv4",
00:14:17.441 "traddr": "192.168.100.8",
00:14:17.441 "trsvcid": "37672"
00:14:17.441 },
00:14:17.441 "auth": {
00:14:17.441 "state": "completed",
00:14:17.441 "digest": "sha384",
00:14:17.441 "dhgroup": "ffdhe3072"
00:14:17.441 }
00:14:17.441 }
00:14:17.441 ]'
00:14:17.441 17:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:17.441 17:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:17.441 17:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:17.441 17:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:14:17.441 17:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:17.441 17:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:17.441 17:38:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:17.698 17:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:17.698 17:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=:
00:14:17.698 17:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=:
00:14:18.630 17:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:18.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:18.630 17:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:14:18.630 17:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:18.630 17:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:18.630 17:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:18.630 17:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:14:18.630 17:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:18.630 17:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:14:18.630 17:38:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:14:18.888 17:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0
00:14:18.888 17:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:18.888 17:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:14:18.888 17:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:14:18.888 17:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:14:18.888 17:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:18.888 17:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:18.888 17:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:18.888 17:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:18.888 17:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:18.888 17:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:18.888 17:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:18.888 17:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:19.146
00:14:19.146 17:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:19.146 17:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:19.146 17:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:19.403 17:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:19.403 17:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:19.403 17:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:19.403 17:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:19.403 17:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:19.403 17:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:19.403 {
00:14:19.403 "cntlid": 73,
00:14:19.403 "qid": 0,
00:14:19.403 "state": "enabled",
00:14:19.403 "thread": "nvmf_tgt_poll_group_000",
00:14:19.403 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:14:19.403 "listen_address": {
00:14:19.403 "trtype": "RDMA",
00:14:19.403 "adrfam": "IPv4",
00:14:19.403 "traddr": "192.168.100.8",
00:14:19.403 "trsvcid": "4420"
00:14:19.403 },
00:14:19.403 "peer_address": {
00:14:19.403 "trtype": "RDMA",
00:14:19.403 "adrfam": "IPv4",
00:14:19.403 "traddr": "192.168.100.8",
00:14:19.403 "trsvcid": "53711"
00:14:19.403 },
00:14:19.403 "auth": {
00:14:19.403 "state": "completed",
00:14:19.403 "digest": "sha384",
00:14:19.403 "dhgroup": "ffdhe4096"
00:14:19.403 }
00:14:19.403 }
00:14:19.403 ]'
00:14:19.403 17:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:19.403 17:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:19.403 17:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:19.403 17:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:19.403 17:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:19.403 17:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:19.403 17:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:19.660 17:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:19.660 17:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=:
00:14:19.660 17:38:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=:
00:14:20.225 17:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:20.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:20.482 17:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:14:20.482 17:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:20.482 17:38:58
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.482 17:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.482 17:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:20.482 17:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:20.482 17:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:20.738 17:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:14:20.738 17:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:20.738 17:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:20.738 17:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:20.738 17:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:20.738 17:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.738 17:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:20.738 17:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.738 17:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.738 17:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.738 17:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:20.738 17:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:20.738 17:38:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:20.995 00:14:20.995 17:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:20.995 17:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:20.995 17:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.251 17:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 
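Each connect_authenticate iteration in this block repeats the same four host/target RPC steps, varying only the digest, DH group, and key index. A minimal out-of-harness sketch of one iteration (rpc.py standing for the spdk/scripts/rpc.py path invoked above; $hostnqn standing for the nqn.2014-08.org.nvmexpress:uuid:800e967b-... host NQN from the log; keys assumed to be already loaded as key1/ckey1 on both sides):

    # host-side bdev_nvme options select the digest and DH group to offer
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
    # target side (default RPC socket): authorize the host with bidirectional keys
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # attaching the controller is what actually runs the DH-HMAC-CHAP transaction
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # verification: the qpair's auth block should report state "completed"
    rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'

On success the test asserts the digest, dhgroup, and state fields from that qpair JSON, then detaches nvme0 and removes the host entry before moving on to the next key/group combination.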
00:14:21.251 17:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.251 17:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.251 17:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.251 17:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.251 17:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:21.251 { 00:14:21.251 "cntlid": 75, 00:14:21.251 "qid": 0, 00:14:21.251 "state": "enabled", 00:14:21.251 "thread": "nvmf_tgt_poll_group_000", 00:14:21.251 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:14:21.251 "listen_address": { 00:14:21.251 "trtype": "RDMA", 00:14:21.251 "adrfam": "IPv4", 00:14:21.251 "traddr": "192.168.100.8", 00:14:21.251 "trsvcid": "4420" 00:14:21.251 }, 00:14:21.251 "peer_address": { 00:14:21.251 "trtype": "RDMA", 00:14:21.251 "adrfam": "IPv4", 00:14:21.251 "traddr": "192.168.100.8", 00:14:21.251 "trsvcid": "41576" 00:14:21.251 }, 00:14:21.251 "auth": { 00:14:21.251 "state": "completed", 00:14:21.251 "digest": "sha384", 00:14:21.251 "dhgroup": "ffdhe4096" 00:14:21.251 } 00:14:21.251 } 00:14:21.251 ]' 00:14:21.251 17:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:21.251 17:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:21.251 17:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:21.251 17:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:21.251 17:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:21.251 17:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.251 17:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.251 17:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.508 17:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==: 00:14:21.508 17:38:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==: 00:14:22.438 17:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.438 17:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:14:22.438 17:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.438 17:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.438 17:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.438 17:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:22.438 17:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:22.438 17:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:22.695 17:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:14:22.695 17:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:22.695 17:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:22.695 17:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:22.695 17:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:22.695 17:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.695 17:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:22.695 17:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.695 17:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.695 17:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.695 17:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:22.695 17:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:22.695 17:39:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:22.952 00:14:22.952 17:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:22.952 17:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:22.952 17:39:01 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.209 17:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.209 17:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.209 17:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.209 17:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.209 17:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.209 17:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:23.209 { 00:14:23.209 "cntlid": 77, 00:14:23.209 "qid": 0, 00:14:23.209 "state": "enabled", 00:14:23.209 "thread": "nvmf_tgt_poll_group_000", 00:14:23.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:14:23.209 "listen_address": { 00:14:23.209 "trtype": "RDMA", 00:14:23.209 "adrfam": "IPv4", 00:14:23.209 "traddr": "192.168.100.8", 00:14:23.209 "trsvcid": "4420" 00:14:23.209 }, 00:14:23.209 "peer_address": { 00:14:23.209 "trtype": "RDMA", 00:14:23.209 "adrfam": "IPv4", 00:14:23.209 "traddr": "192.168.100.8", 00:14:23.209 "trsvcid": "41648" 00:14:23.209 }, 00:14:23.209 "auth": { 00:14:23.209 "state": "completed", 00:14:23.209 "digest": "sha384", 00:14:23.209 "dhgroup": "ffdhe4096" 00:14:23.209 } 00:14:23.209 } 00:14:23.209 ]' 00:14:23.209 17:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:23.209 17:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:23.209 17:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:23.209 17:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:23.209 17:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:23.210 17:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.210 17:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.210 17:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.467 17:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z: 00:14:23.467 17:39:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z: 00:14:24.030 17:39:02 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.288 17:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:14:24.288 17:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.288 17:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.288 17:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.289 17:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:24.289 17:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:24.289 17:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:24.546 17:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:14:24.546 17:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:24.546 17:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:24.546 17:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:24.546 17:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:24.546 17:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.546 17:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3 00:14:24.546 17:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.546 17:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.547 17:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.547 17:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:24.547 17:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:24.547 17:39:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:24.804 00:14:24.804 17:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:14:24.804 17:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:24.804 17:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.062 17:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.062 17:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:25.062 17:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.062 17:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.062 17:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.062 17:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:25.062 { 00:14:25.062 "cntlid": 79, 00:14:25.062 "qid": 0, 00:14:25.062 "state": "enabled", 00:14:25.062 "thread": "nvmf_tgt_poll_group_000", 00:14:25.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:14:25.062 "listen_address": { 00:14:25.062 "trtype": "RDMA", 00:14:25.062 "adrfam": "IPv4", 00:14:25.062 "traddr": "192.168.100.8", 00:14:25.062 "trsvcid": "4420" 00:14:25.062 }, 00:14:25.062 "peer_address": { 00:14:25.062 "trtype": "RDMA", 00:14:25.062 "adrfam": "IPv4", 00:14:25.062 "traddr": "192.168.100.8", 00:14:25.062 "trsvcid": "42248" 00:14:25.062 }, 00:14:25.062 "auth": { 00:14:25.062 "state": "completed", 00:14:25.062 "digest": "sha384", 00:14:25.062 "dhgroup": "ffdhe4096" 00:14:25.062 } 00:14:25.062 } 00:14:25.062 ]' 00:14:25.062 17:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:25.062 17:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:25.062 17:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:25.062 17:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:25.062 17:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:25.320 17:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.320 17:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.320 17:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.320 17:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=: 00:14:25.320 17:39:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret 
DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=: 00:14:26.335 17:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.335 17:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:14:26.335 17:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.335 17:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.335 17:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.335 17:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:26.335 17:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:26.335 17:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:26.335 17:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:26.594 17:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:14:26.594 17:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:26.594 17:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:26.594 17:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:26.594 17:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:26.594 17:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.594 17:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.594 17:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.594 17:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.594 17:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.594 17:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.594 17:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.594 17:39:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:26.851 00:14:26.851 17:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:26.851 17:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.851 17:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:27.110 17:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.110 17:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.110 17:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.110 17:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.110 17:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.110 17:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:27.110 { 00:14:27.110 "cntlid": 81, 00:14:27.110 "qid": 0, 00:14:27.110 "state": "enabled", 00:14:27.110 "thread": "nvmf_tgt_poll_group_000", 00:14:27.110 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:14:27.110 "listen_address": { 00:14:27.110 "trtype": "RDMA", 00:14:27.110 "adrfam": "IPv4", 00:14:27.110 "traddr": "192.168.100.8", 00:14:27.110 "trsvcid": "4420" 00:14:27.110 }, 00:14:27.110 "peer_address": { 00:14:27.110 "trtype": "RDMA", 00:14:27.110 "adrfam": "IPv4", 00:14:27.110 "traddr": "192.168.100.8", 00:14:27.110 "trsvcid": "58304" 00:14:27.110 }, 00:14:27.110 "auth": { 00:14:27.110 "state": "completed", 00:14:27.110 "digest": "sha384", 00:14:27.110 "dhgroup": "ffdhe6144" 00:14:27.110 } 00:14:27.110 } 00:14:27.110 ]' 00:14:27.110 17:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:27.110 17:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:27.110 17:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:27.110 17:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:27.110 17:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:27.110 17:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.110 17:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.110 17:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.368 17:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret 
DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=: 00:14:27.368 17:39:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=: 00:14:28.301 17:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.301 17:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:14:28.301 17:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.301 17:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.301 17:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.301 17:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:28.301 17:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:28.301 17:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:28.559 17:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:14:28.559 17:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:28.559 17:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:28.559 17:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:28.559 17:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:28.559 17:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.559 17:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.559 17:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.559 17:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.559 17:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.559 17:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.559 17:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.560 17:39:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.817 00:14:28.817 17:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:28.817 17:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.817 17:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:29.075 17:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.075 17:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.075 17:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.075 17:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.075 17:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.075 17:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:29.075 { 00:14:29.075 "cntlid": 83, 00:14:29.075 "qid": 0, 00:14:29.075 "state": "enabled", 00:14:29.075 "thread": "nvmf_tgt_poll_group_000", 00:14:29.075 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:14:29.075 "listen_address": { 00:14:29.075 "trtype": "RDMA", 00:14:29.075 "adrfam": "IPv4", 00:14:29.075 "traddr": "192.168.100.8", 00:14:29.075 "trsvcid": "4420" 00:14:29.075 }, 00:14:29.075 "peer_address": { 00:14:29.075 "trtype": "RDMA", 00:14:29.075 "adrfam": "IPv4", 00:14:29.075 "traddr": "192.168.100.8", 00:14:29.075 "trsvcid": "52231" 00:14:29.075 }, 00:14:29.075 "auth": { 00:14:29.075 "state": "completed", 00:14:29.075 "digest": "sha384", 00:14:29.075 "dhgroup": "ffdhe6144" 00:14:29.075 } 00:14:29.075 } 00:14:29.075 ]' 00:14:29.075 17:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:29.075 17:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:29.075 17:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:29.333 17:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:29.333 17:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:29.333 17:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.333 17:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.333 17:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.333 17:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==: 00:14:29.333 17:39:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==: 00:14:30.268 17:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.268 17:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:14:30.268 17:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.268 17:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.268 17:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.268 17:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:30.268 17:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:30.268 17:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:30.526 17:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:14:30.526 17:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:30.526 17:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:30.526 17:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:30.526 17:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:30.526 17:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.526 17:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.526 17:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.526 17:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.526 17:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
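Alongside the RPC-driven attach path, the nvme_connect/disconnect pairs in this log push the same keys through the kernel initiator. A sketch of that path with nvme-cli, mirroring the flags used above ($host_secret/$ctrl_secret standing for the DHHC-1:xx:... strings printed in the log, $hostnqn/$hostid for the 800e967b-... identity):

    # -l 0 sets ctrl-loss-tmo to 0 so a failed connect is not retried;
    # --dhchap-ctrl-secret makes the authentication bidirectional
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"
    # a successful handshake is followed by a clean teardown
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

For key3 the log passes only --dhchap-secret (no controller secret), exercising the unidirectional case.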
00:14:30.526 17:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.526 17:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.526 17:39:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.784 00:14:30.784 17:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:30.784 17:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:30.784 17:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.042 17:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.042 17:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.042 17:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.042 17:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.042 17:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.042 17:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:31.042 { 00:14:31.042 "cntlid": 85, 00:14:31.042 "qid": 0, 00:14:31.042 "state": "enabled", 00:14:31.042 "thread": "nvmf_tgt_poll_group_000", 00:14:31.042 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:14:31.042 "listen_address": { 00:14:31.042 "trtype": "RDMA", 00:14:31.042 "adrfam": "IPv4", 00:14:31.042 "traddr": "192.168.100.8", 00:14:31.042 "trsvcid": "4420" 00:14:31.042 }, 00:14:31.042 "peer_address": { 00:14:31.042 "trtype": "RDMA", 00:14:31.042 "adrfam": "IPv4", 00:14:31.042 "traddr": "192.168.100.8", 00:14:31.042 "trsvcid": "58150" 00:14:31.042 }, 00:14:31.042 "auth": { 00:14:31.042 "state": "completed", 00:14:31.042 "digest": "sha384", 00:14:31.042 "dhgroup": "ffdhe6144" 00:14:31.042 } 00:14:31.042 } 00:14:31.042 ]' 00:14:31.042 17:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:31.042 17:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:31.042 17:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:31.300 17:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:31.300 17:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:31.300 17:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:14:31.300 17:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.300 17:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.558 17:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z: 00:14:31.558 17:39:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z: 00:14:32.125 17:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.382 17:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:14:32.382 17:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.382 17:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.382 17:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.382 17:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:32.383 17:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:32.383 17:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:32.641 17:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:14:32.641 17:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:32.641 17:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:32.641 17:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:32.641 17:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:32.641 17:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.641 17:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3 00:14:32.641 17:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.641 
17:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:32.641 17:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:32.641 17:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:14:32.641 17:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:32.641 17:39:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:32.899
00:14:32.900 17:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:32.900 17:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:32.900 17:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:33.158 17:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:33.158 17:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:33.158 17:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:33.158 17:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:33.158 17:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:33.158 17:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:33.158 {
00:14:33.158 "cntlid": 87,
00:14:33.158 "qid": 0,
00:14:33.158 "state": "enabled",
00:14:33.158 "thread": "nvmf_tgt_poll_group_000",
00:14:33.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:14:33.158 "listen_address": {
00:14:33.158 "trtype": "RDMA",
00:14:33.158 "adrfam": "IPv4",
00:14:33.158 "traddr": "192.168.100.8",
00:14:33.158 "trsvcid": "4420"
00:14:33.158 },
00:14:33.158 "peer_address": {
00:14:33.158 "trtype": "RDMA",
00:14:33.158 "adrfam": "IPv4",
00:14:33.158 "traddr": "192.168.100.8",
00:14:33.158 "trsvcid": "54701"
00:14:33.158 },
00:14:33.158 "auth": {
00:14:33.158 "state": "completed",
00:14:33.158 "digest": "sha384",
00:14:33.158 "dhgroup": "ffdhe6144"
00:14:33.158 }
00:14:33.158 }
00:14:33.158 ]'
00:14:33.158 17:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:33.158 17:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:33.158 17:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:33.158 17:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:14:33.158 17:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:33.158 17:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:33.158 17:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:33.158 17:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:33.416 17:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=:
00:14:33.416 17:39:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=:
00:14:33.982 17:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:34.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:34.240 17:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:14:34.240 17:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:34.240 17:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:34.240 17:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:34.240 17:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:14:34.240 17:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:34.240 17:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:14:34.240 17:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:14:34.499 17:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0
00:14:34.499 17:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:34.499 17:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:14:34.499 17:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:14:34.499 17:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:14:34.499 17:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:34.499 17:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:34.499 17:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:34.499 17:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:34.499 17:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:34.499 17:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:34.499 17:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:34.499 17:39:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:35.065
00:14:35.065 17:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:35.065 17:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:35.065 17:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:35.323 17:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:35.323 17:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:35.323 17:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:35.323 17:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:35.323 17:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:35.323 17:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:35.323 {
00:14:35.323 "cntlid": 89,
00:14:35.323 "qid": 0,
00:14:35.323 "state": "enabled",
00:14:35.323 "thread": "nvmf_tgt_poll_group_000",
00:14:35.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:14:35.323 "listen_address": {
00:14:35.323 "trtype": "RDMA",
00:14:35.323 "adrfam": "IPv4",
00:14:35.323 "traddr": "192.168.100.8",
00:14:35.323 "trsvcid": "4420"
00:14:35.323 },
00:14:35.323 "peer_address": {
00:14:35.323 "trtype": "RDMA",
00:14:35.323 "adrfam": "IPv4",
00:14:35.323 "traddr": "192.168.100.8",
00:14:35.323 "trsvcid": "47335"
00:14:35.323 },
00:14:35.323 "auth": {
00:14:35.323 "state": "completed",
00:14:35.323 "digest": "sha384",
00:14:35.323 "dhgroup": "ffdhe8192"
00:14:35.323 }
00:14:35.323 }
00:14:35.323 ]'
00:14:35.323 17:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:35.323 17:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:35.323 17:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:35.323 17:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:14:35.323 17:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:35.323 17:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:35.323 17:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:35.323 17:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:35.582 17:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=:
00:14:35.582 17:39:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=:
00:14:36.148 17:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:36.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:36.406 17:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:14:36.406 17:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:36.406 17:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:36.406 17:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:36.406 17:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:36.406 17:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:14:36.406 17:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:14:36.665 17:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1
00:14:36.665 17:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:36.665 17:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:14:36.665 17:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:14:36.665 17:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:14:36.665 17:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:36.665 17:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:36.665 17:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:36.665 17:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:36.665 17:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:36.665 17:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:36.665 17:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:36.665 17:39:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:37.230
00:14:37.230 17:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:37.230 17:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:37.230 17:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:37.231 17:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:37.488 17:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:37.488 17:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:37.488 17:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:37.488 17:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:37.488 17:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:37.488 {
00:14:37.488 "cntlid": 91,
00:14:37.488 "qid": 0,
00:14:37.488 "state": "enabled",
00:14:37.488 "thread": "nvmf_tgt_poll_group_000",
00:14:37.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:14:37.488 "listen_address": {
00:14:37.488 "trtype": "RDMA",
00:14:37.488 "adrfam": "IPv4",
00:14:37.488 "traddr": "192.168.100.8",
00:14:37.488 "trsvcid": "4420"
00:14:37.488 },
00:14:37.488 "peer_address": {
00:14:37.488 "trtype": "RDMA",
00:14:37.488 "adrfam": "IPv4",
00:14:37.488 "traddr": "192.168.100.8",
00:14:37.488 "trsvcid": "48204"
00:14:37.488 },
00:14:37.488 "auth": {
00:14:37.488 "state": "completed",
00:14:37.488 "digest": "sha384",
00:14:37.488 "dhgroup": "ffdhe8192"
00:14:37.488 }
00:14:37.488 }
00:14:37.488 ]'
00:14:37.488 17:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:37.488 17:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:37.488 17:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:37.488 17:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:14:37.488 17:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:37.488 17:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:37.488 17:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:37.488 17:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:37.745 17:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==:
00:14:37.745 17:39:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==:
00:14:38.310 17:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:38.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:38.567 17:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:14:38.567 17:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:38.568 17:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:38.568 17:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:38.568 17:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:38.568 17:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:14:38.568 17:39:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:14:38.824 17:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2
00:14:38.824 17:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:38.824 17:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:14:38.824 17:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:14:38.824 17:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:14:38.824 17:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:38.824 17:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:38.824 17:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:38.824 17:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:38.824 17:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:38.824 17:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:38.824 17:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:38.824 17:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:39.389
00:14:39.389 17:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:39.389 17:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:39.389 17:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:39.389 17:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:39.389 17:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:39.389 17:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:39.389 17:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:39.389 17:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:39.389 17:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:39.389 {
00:14:39.389 "cntlid": 93,
00:14:39.389 "qid": 0,
00:14:39.389 "state": "enabled",
00:14:39.389 "thread": "nvmf_tgt_poll_group_000",
00:14:39.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:14:39.389 "listen_address": {
00:14:39.389 "trtype": "RDMA",
00:14:39.389 "adrfam": "IPv4",
00:14:39.389 "traddr": "192.168.100.8",
00:14:39.389 "trsvcid": "4420"
00:14:39.389 },
00:14:39.389 "peer_address": {
00:14:39.389 "trtype": "RDMA",
00:14:39.389 "adrfam": "IPv4",
00:14:39.389 "traddr": "192.168.100.8",
00:14:39.389 "trsvcid": "38834"
00:14:39.389 },
00:14:39.389 "auth": {
00:14:39.389 "state": "completed",
00:14:39.389 "digest": "sha384",
00:14:39.389 "dhgroup": "ffdhe8192"
00:14:39.389 }
00:14:39.389 }
00:14:39.389 ]'
00:14:39.647 17:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:39.647 17:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:39.647 17:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:39.647 17:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:14:39.647 17:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:39.647 17:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:39.647 17:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:39.648 17:39:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:39.906 17:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z:
00:14:39.906 17:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z:
00:14:40.472 17:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:40.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:40.730 17:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:14:40.730 17:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:40.730 17:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:40.730 17:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:40.730 17:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:40.730 17:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:14:40.730 17:39:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:14:40.988 17:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3
00:14:40.988 17:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:40.988 17:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:14:40.988 17:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:14:40.988 17:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:14:40.988 17:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:40.988 17:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3
00:14:40.988 17:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:40.988 17:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:40.988 17:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:40.988 17:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:14:40.988 17:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:40.988 17:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:41.246
00:14:41.504 17:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:41.504 17:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:41.504 17:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:41.504 17:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:41.504 17:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:41.504 17:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:41.504 17:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:41.504 17:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:41.504 17:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:41.504 {
00:14:41.504 "cntlid": 95,
00:14:41.504 "qid": 0,
00:14:41.504 "state": "enabled",
00:14:41.504 "thread": "nvmf_tgt_poll_group_000",
00:14:41.504 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:14:41.504 "listen_address": {
00:14:41.504 "trtype": "RDMA",
00:14:41.504 "adrfam": "IPv4",
00:14:41.504 "traddr": "192.168.100.8",
00:14:41.504 "trsvcid": "4420"
00:14:41.504 },
00:14:41.504 "peer_address": {
00:14:41.504 "trtype": "RDMA",
00:14:41.504 "adrfam": "IPv4",
00:14:41.504 "traddr": "192.168.100.8",
00:14:41.504 "trsvcid": "43394"
00:14:41.504 },
00:14:41.504 "auth": {
00:14:41.504 "state": "completed",
00:14:41.504 "digest": "sha384",
00:14:41.504 "dhgroup": "ffdhe8192"
00:14:41.504 }
00:14:41.504 }
00:14:41.504 ]'
00:14:41.504 17:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:41.504 17:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:14:41.762 17:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:41.762 17:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:14:41.762 17:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:41.762 17:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:41.762 17:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:41.762 17:39:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:42.020 17:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=:
00:14:42.020 17:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=:
00:14:42.585 17:39:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:42.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:42.843 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:14:42.843 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:42.843 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:42.844 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:42.844 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:14:42.844 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:14:42.844 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:42.844 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:14:42.844 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:14:43.102 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0
00:14:43.102 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:43.102 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:14:43.102 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:14:43.102 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:14:43.102 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:43.102 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:43.102 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:43.102 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:43.102 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:43.102 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:43.102 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:43.102 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:43.360
00:14:43.360 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:43.360 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:43.360 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:43.360 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:43.360 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:43.360 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:43.360 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:43.618 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:43.618 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:43.618 {
00:14:43.618 "cntlid": 97,
00:14:43.618 "qid": 0,
00:14:43.618 "state": "enabled",
00:14:43.618 "thread": "nvmf_tgt_poll_group_000",
00:14:43.618 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:14:43.618 "listen_address": {
00:14:43.618 "trtype": "RDMA",
00:14:43.618 "adrfam": "IPv4",
00:14:43.618 "traddr": "192.168.100.8",
00:14:43.618 "trsvcid": "4420"
00:14:43.618 },
00:14:43.618 "peer_address": {
00:14:43.618 "trtype": "RDMA",
00:14:43.618 "adrfam": "IPv4",
00:14:43.618 "traddr": "192.168.100.8",
00:14:43.618 "trsvcid": "39363"
00:14:43.618 },
00:14:43.618 "auth": {
00:14:43.618 "state": "completed",
00:14:43.618 "digest": "sha512",
00:14:43.618 "dhgroup": "null"
00:14:43.618 }
00:14:43.618 }
00:14:43.618 ]'
00:14:43.618 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:43.618 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:14:43.618 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:43.618 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:14:43.618 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:43.618 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:43.619 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:43.619 17:39:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:43.876 17:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=:
00:14:43.876 17:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=:
00:14:44.441 17:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:44.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:44.699 17:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:14:44.699 17:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:44.699 17:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:44.699 17:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:44.699 17:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:44.699 17:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:14:44.699 17:39:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:14:44.957 17:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1
00:14:44.957 17:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:44.957 17:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:14:44.957 17:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:14:44.957 17:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:14:44.957 17:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:44.957 17:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:44.957 17:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:44.957 17:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:44.957 17:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:44.957 17:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:44.957 17:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:44.957 17:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:45.215
00:14:45.215 17:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:45.215 17:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:45.215 17:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:45.473 17:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:45.473 17:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:45.473 17:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:45.473 17:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:45.473 17:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:45.473 17:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:45.473 {
00:14:45.473 "cntlid": 99,
00:14:45.473 "qid": 0,
00:14:45.473 "state": "enabled",
00:14:45.473 "thread": "nvmf_tgt_poll_group_000",
00:14:45.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:14:45.473 "listen_address": {
00:14:45.473 "trtype": "RDMA",
00:14:45.473 "adrfam": "IPv4",
00:14:45.473 "traddr": "192.168.100.8",
00:14:45.473 "trsvcid": "4420"
00:14:45.473 },
00:14:45.473 "peer_address": {
00:14:45.473 "trtype": "RDMA",
00:14:45.473 "adrfam": "IPv4",
00:14:45.473 "traddr": "192.168.100.8",
00:14:45.473 "trsvcid": "51486"
00:14:45.473 },
00:14:45.473 "auth": {
00:14:45.473 "state": "completed",
00:14:45.473 "digest": "sha512",
00:14:45.473 "dhgroup": "null"
00:14:45.473 }
00:14:45.473 }
00:14:45.473 ]'
00:14:45.473 17:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:45.473 17:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:14:45.473 17:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:45.473 17:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:14:45.473 17:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:45.473 17:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:45.473 17:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:45.473 17:39:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:45.730 17:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==:
00:14:45.730 17:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==:
00:14:46.295 17:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:46.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:46.552 17:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:14:46.552 17:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:46.552 17:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:46.552 17:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:46.552 17:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:46.552 17:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:14:46.553 17:39:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:14:46.810 17:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2
00:14:46.810 17:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:46.810 17:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:14:46.810 17:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:14:46.810 17:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:14:46.810 17:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:46.810 17:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:46.810 17:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:46.810 17:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:46.810 17:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:46.810 17:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:46.810 17:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:46.810 17:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:47.077
00:14:47.077 17:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:47.077 17:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:47.077 17:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:47.336 17:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:47.336 17:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:47.336 17:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:47.336 17:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:47.336 17:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:47.336 17:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:47.336 {
00:14:47.336 "cntlid": 101,
00:14:47.336 "qid": 0,
00:14:47.336 "state": "enabled",
00:14:47.336 "thread": "nvmf_tgt_poll_group_000",
00:14:47.336 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:14:47.336 "listen_address": {
00:14:47.336 "trtype": "RDMA",
00:14:47.336 "adrfam": "IPv4",
00:14:47.336 "traddr": "192.168.100.8",
00:14:47.336 "trsvcid": "4420"
00:14:47.336 },
00:14:47.336 "peer_address": {
00:14:47.336 "trtype": "RDMA",
00:14:47.336 "adrfam": "IPv4",
00:14:47.336 "traddr": "192.168.100.8",
00:14:47.336 "trsvcid": "47564"
00:14:47.336 },
00:14:47.336 "auth": {
00:14:47.336 "state": "completed",
00:14:47.336 "digest": "sha512",
00:14:47.336 "dhgroup": "null"
00:14:47.336 }
00:14:47.336 }
00:14:47.336 ]'
00:14:47.336 17:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:47.336 17:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:14:47.336 17:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:47.336 17:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:14:47.336 17:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:47.336 17:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:47.336 17:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:47.336 17:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:47.594 17:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z:
00:14:47.594 17:39:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z:
00:14:48.158 17:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:48.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:48.415 17:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:14:48.415 17:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:48.415 17:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:48.415 17:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:48.415 17:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:48.415 17:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:14:48.415 17:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:14:48.671 17:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3
00:14:48.671 17:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:48.671 17:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:14:48.671 17:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:14:48.671 17:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:14:48.671 17:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:48.671 17:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3
00:14:48.671 17:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:48.671 17:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:48.671 17:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:48.671 17:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:14:48.671 17:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:48.671 17:39:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:14:48.929
00:14:48.929 17:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:48.929 17:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:48.929 17:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:49.186 17:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:49.186 17:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:49.186 17:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:49.186 17:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:49.186 17:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:49.186 17:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:49.186 {
00:14:49.186 "cntlid": 103,
00:14:49.186 "qid": 0,
00:14:49.186 "state": "enabled",
00:14:49.186 "thread": "nvmf_tgt_poll_group_000",
00:14:49.186 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:14:49.186 "listen_address": {
00:14:49.186 "trtype": "RDMA",
00:14:49.186 "adrfam": "IPv4",
00:14:49.186 "traddr": "192.168.100.8",
00:14:49.186 "trsvcid": "4420"
00:14:49.186 },
00:14:49.186 "peer_address": {
00:14:49.186 "trtype": "RDMA",
00:14:49.186 "adrfam": "IPv4",
00:14:49.186 "traddr": "192.168.100.8",
00:14:49.186 "trsvcid": "54020"
00:14:49.186 },
00:14:49.186 "auth": {
00:14:49.186 "state": "completed",
00:14:49.186 "digest": "sha512",
00:14:49.186 "dhgroup": "null"
00:14:49.186 }
00:14:49.186 }
00:14:49.186 ]'
00:14:49.186 17:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:49.186 17:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:14:49.186 17:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:49.186 17:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:14:49.186 17:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:49.186 17:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:49.186 17:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:49.186 17:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:49.443 17:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=:
00:14:49.444 17:39:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=:
00:14:50.007 17:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:50.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:50.265 17:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:14:50.265 17:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:50.265 17:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:50.265 17:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:50.265 17:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:14:50.265 17:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:14:50.265 17:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:14:50.265 17:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:14:50.523 17:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0
00:14:50.523 17:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:14:50.523 17:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:14:50.523 17:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:14:50.523 17:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:14:50.523 17:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:50.523 17:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:50.523 17:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:50.523 17:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:50.523 17:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:50.523 17:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:50.523 17:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:50.523 17:39:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:50.781
00:14:50.781 17:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:14:50.781 17:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:50.781 17:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:14:51.048 17:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:51.048 17:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:51.048 17:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:51.048 17:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:51.048 17:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:51.048 17:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:14:51.048 {
00:14:51.048 "cntlid": 105,
00:14:51.048 "qid": 0,
00:14:51.048 "state": "enabled",
00:14:51.048 "thread": "nvmf_tgt_poll_group_000",
00:14:51.048 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c",
00:14:51.048 "listen_address": {
00:14:51.048 "trtype": "RDMA",
00:14:51.048 "adrfam": "IPv4",
00:14:51.048 "traddr": "192.168.100.8",
00:14:51.048 "trsvcid": "4420"
00:14:51.048 },
00:14:51.048 "peer_address": {
00:14:51.048 "trtype": "RDMA",
00:14:51.048 "adrfam": "IPv4",
00:14:51.048 "traddr": "192.168.100.8",
00:14:51.048 "trsvcid": "58989"
00:14:51.048 },
00:14:51.048 "auth": {
00:14:51.048 "state": "completed",
00:14:51.048 "digest": "sha512",
00:14:51.048 "dhgroup": "ffdhe2048"
00:14:51.048 }
00:14:51.048 }
00:14:51.048 ]'
00:14:51.048 17:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:14:51.048 17:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:14:51.048 17:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:14:51.048 17:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:14:51.048 17:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:14:51.048 17:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:51.048 17:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:51.048 17:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:51.306 17:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=:
00:14:51.306 17:39:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0
--dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=: 00:14:51.870 17:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.127 17:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:14:52.127 17:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.127 17:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.127 17:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.127 17:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:52.127 17:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:52.127 17:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:52.385 17:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:14:52.385 17:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:52.385 17:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:52.385 17:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:52.385 17:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:52.385 17:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.385 17:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.385 17:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.385 17:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.385 17:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.385 17:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.385 17:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.385 17:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.643 00:14:52.643 17:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:52.643 17:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.643 17:39:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:52.900 17:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.900 17:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.900 17:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.900 17:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.900 17:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.900 17:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:52.900 { 00:14:52.900 "cntlid": 107, 00:14:52.900 "qid": 0, 00:14:52.900 "state": "enabled", 00:14:52.900 "thread": "nvmf_tgt_poll_group_000", 00:14:52.900 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:14:52.900 "listen_address": { 00:14:52.900 "trtype": "RDMA", 00:14:52.900 "adrfam": "IPv4", 00:14:52.900 "traddr": "192.168.100.8", 00:14:52.900 "trsvcid": "4420" 00:14:52.900 }, 00:14:52.900 "peer_address": { 00:14:52.900 "trtype": "RDMA", 00:14:52.900 "adrfam": "IPv4", 00:14:52.900 "traddr": "192.168.100.8", 00:14:52.900 "trsvcid": "33651" 00:14:52.900 }, 00:14:52.900 "auth": { 00:14:52.900 "state": "completed", 00:14:52.900 "digest": "sha512", 00:14:52.900 "dhgroup": "ffdhe2048" 00:14:52.900 } 00:14:52.900 } 00:14:52.900 ]' 00:14:52.900 17:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:52.900 17:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:52.900 17:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:52.900 17:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:52.900 17:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:53.158 17:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.158 17:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.158 17:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.158 17:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==: 
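For reference, the host-authentication step that nvme_connect wraps can be reproduced standalone with nvme-cli. A minimal sketch, assuming the same target address and subsystem NQN as this run; HOSTNQN, HOSTID and the two DHCHAP secrets are placeholders for the values the script passes in, not real key material:

    # Connect over RDMA, authenticating with bidirectional DHCHAP:
    # --dhchap-secret is the host's key, --dhchap-ctrl-secret the
    # controller's. -i 1 (one I/O queue) and -l 0 (ctrl-loss-tmo)
    # mirror the flags used at auth.sh@36.
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 \
        -i 1 -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
        --dhchap-secret "$DHCHAP_KEY" --dhchap-ctrl-secret "$DHCHAP_CTRL_KEY"

    # Tear the session down again, as auth.sh@82 does.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0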
00:14:53.158 17:39:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==: 00:14:54.090 17:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.090 17:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:14:54.090 17:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.090 17:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.090 17:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.090 17:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:54.090 17:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:54.090 17:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:54.347 17:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:14:54.347 17:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:54.347 17:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:54.347 17:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:54.347 17:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:54.347 17:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.347 17:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.347 17:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.347 17:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.347 17:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.347 17:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.347 17:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.347 17:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.605 00:14:54.605 17:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:54.605 17:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.605 17:39:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:54.862 17:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.862 17:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.862 17:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.862 17:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.862 17:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.862 17:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:54.862 { 00:14:54.862 "cntlid": 109, 00:14:54.862 "qid": 0, 00:14:54.862 "state": "enabled", 00:14:54.862 "thread": "nvmf_tgt_poll_group_000", 00:14:54.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:14:54.862 "listen_address": { 00:14:54.862 "trtype": "RDMA", 00:14:54.862 "adrfam": "IPv4", 00:14:54.862 "traddr": "192.168.100.8", 00:14:54.862 "trsvcid": "4420" 00:14:54.862 }, 00:14:54.862 "peer_address": { 00:14:54.862 "trtype": "RDMA", 00:14:54.862 "adrfam": "IPv4", 00:14:54.862 "traddr": "192.168.100.8", 00:14:54.862 "trsvcid": "42086" 00:14:54.862 }, 00:14:54.862 "auth": { 00:14:54.862 "state": "completed", 00:14:54.862 "digest": "sha512", 00:14:54.862 "dhgroup": "ffdhe2048" 00:14:54.862 } 00:14:54.862 } 00:14:54.862 ]' 00:14:54.862 17:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:54.862 17:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:54.862 17:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:54.862 17:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:54.862 17:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:54.862 17:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.862 17:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.862 17:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.121 17:39:33 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z: 00:14:55.121 17:39:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z: 00:14:55.685 17:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.942 17:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:14:55.942 17:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.942 17:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.942 17:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.942 17:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:55.942 17:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:55.942 17:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:14:56.200 17:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:14:56.200 17:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:56.200 17:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:56.200 17:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:56.200 17:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:56.200 17:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.200 17:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3 00:14:56.200 17:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.200 17:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.200 17:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.200 17:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:56.200 17:39:34 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:56.200 17:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:56.458 00:14:56.458 17:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:56.458 17:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:56.458 17:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.716 17:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.716 17:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.716 17:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.716 17:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.716 17:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.716 17:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:56.716 { 00:14:56.716 "cntlid": 111, 00:14:56.716 "qid": 0, 00:14:56.716 "state": "enabled", 00:14:56.716 "thread": "nvmf_tgt_poll_group_000", 00:14:56.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:14:56.716 "listen_address": { 00:14:56.716 "trtype": "RDMA", 00:14:56.716 "adrfam": "IPv4", 00:14:56.716 "traddr": "192.168.100.8", 00:14:56.716 "trsvcid": "4420" 00:14:56.716 }, 00:14:56.716 "peer_address": { 00:14:56.716 "trtype": "RDMA", 00:14:56.716 "adrfam": "IPv4", 00:14:56.716 "traddr": "192.168.100.8", 00:14:56.716 "trsvcid": "32886" 00:14:56.716 }, 00:14:56.716 "auth": { 00:14:56.716 "state": "completed", 00:14:56.716 "digest": "sha512", 00:14:56.716 "dhgroup": "ffdhe2048" 00:14:56.716 } 00:14:56.716 } 00:14:56.716 ]' 00:14:56.716 17:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:56.716 17:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:56.716 17:39:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:56.716 17:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:56.716 17:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:56.716 17:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.716 17:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.716 17:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.973 17:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=: 00:14:56.974 17:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=: 00:14:57.540 17:39:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.798 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:14:57.798 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.798 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.798 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.798 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:57.798 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:57.798 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:57.798 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:58.056 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:14:58.056 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:58.056 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:58.056 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:58.056 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:58.056 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.056 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.056 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.056 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.057 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:58.057 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.057 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.057 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.315 00:14:58.315 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:58.315 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:58.315 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.573 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.573 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.573 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.573 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.573 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.573 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:58.573 { 00:14:58.573 "cntlid": 113, 00:14:58.573 "qid": 0, 00:14:58.573 "state": "enabled", 00:14:58.573 "thread": "nvmf_tgt_poll_group_000", 00:14:58.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:14:58.573 "listen_address": { 00:14:58.573 "trtype": "RDMA", 00:14:58.573 "adrfam": "IPv4", 00:14:58.573 "traddr": "192.168.100.8", 00:14:58.573 "trsvcid": "4420" 00:14:58.573 }, 00:14:58.573 "peer_address": { 00:14:58.573 "trtype": "RDMA", 00:14:58.573 "adrfam": "IPv4", 00:14:58.573 "traddr": "192.168.100.8", 00:14:58.573 "trsvcid": "32804" 00:14:58.573 }, 00:14:58.573 "auth": { 00:14:58.573 "state": "completed", 00:14:58.573 "digest": "sha512", 00:14:58.573 "dhgroup": "ffdhe3072" 00:14:58.573 } 00:14:58.573 } 00:14:58.573 ]' 00:14:58.573 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:58.573 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:58.573 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:58.573 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:58.574 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:58.574 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.574 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.574 17:39:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.831 17:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=: 00:14:58.832 17:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=: 00:14:59.764 17:39:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.764 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:14:59.764 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.764 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.764 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.764 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.764 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:14:59.764 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:00.022 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:15:00.022 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:00.022 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:00.022 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:00.022 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:00.022 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.022 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1 
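On the target side, each iteration pins the keys a host must present before it may connect; the rpc_cmd calls above boil down to the following, assuming scripts/rpc.py talks to the target's default RPC socket and that key1/ckey1 were registered with the target earlier in the script (that step is outside this excerpt):

    # Allow the host on the subsystem and require DHCHAP with key1,
    # plus ckey1 for the controller (bidirectional) direction.
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Revoked again after the checks, as auth.sh@83 does.
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"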
00:15:00.022 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.022 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.022 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.022 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.022 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.022 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.279 00:15:00.279 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:00.279 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:00.279 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.537 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.537 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.537 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.537 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.537 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.537 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:00.537 { 00:15:00.537 "cntlid": 115, 00:15:00.537 "qid": 0, 00:15:00.537 "state": "enabled", 00:15:00.537 "thread": "nvmf_tgt_poll_group_000", 00:15:00.537 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:15:00.537 "listen_address": { 00:15:00.538 "trtype": "RDMA", 00:15:00.538 "adrfam": "IPv4", 00:15:00.538 "traddr": "192.168.100.8", 00:15:00.538 "trsvcid": "4420" 00:15:00.538 }, 00:15:00.538 "peer_address": { 00:15:00.538 "trtype": "RDMA", 00:15:00.538 "adrfam": "IPv4", 00:15:00.538 "traddr": "192.168.100.8", 00:15:00.538 "trsvcid": "43621" 00:15:00.538 }, 00:15:00.538 "auth": { 00:15:00.538 "state": "completed", 00:15:00.538 "digest": "sha512", 00:15:00.538 "dhgroup": "ffdhe3072" 00:15:00.538 } 00:15:00.538 } 00:15:00.538 ]' 00:15:00.538 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:00.538 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:00.538 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
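The assertion pattern at auth.sh@73-77, repeated after every attach, is the actual test: it pulls the live qpair list from the target and checks that authentication completed with the expected digest and DH group, instead of merely trusting that the connect succeeded. Condensed, for the ffdhe3072 iteration running here:

    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]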
00:15:00.538 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:00.538 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:00.538 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.538 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.538 17:39:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.795 17:39:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==: 00:15:00.795 17:39:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==: 00:15:01.362 17:39:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.620 17:39:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:15:01.620 17:39:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.620 17:39:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.620 17:39:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.620 17:39:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:01.620 17:39:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:01.620 17:39:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:01.879 17:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:15:01.879 17:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.879 17:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:01.879 17:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:01.879 17:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:01.879 17:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.879 
17:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.879 17:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.879 17:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.879 17:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.879 17:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.879 17:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.879 17:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.137 00:15:02.137 17:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:02.137 17:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.137 17:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:02.395 17:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.395 17:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.395 17:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.395 17:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.395 17:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.395 17:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:02.395 { 00:15:02.395 "cntlid": 117, 00:15:02.395 "qid": 0, 00:15:02.395 "state": "enabled", 00:15:02.395 "thread": "nvmf_tgt_poll_group_000", 00:15:02.395 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:15:02.395 "listen_address": { 00:15:02.395 "trtype": "RDMA", 00:15:02.395 "adrfam": "IPv4", 00:15:02.395 "traddr": "192.168.100.8", 00:15:02.395 "trsvcid": "4420" 00:15:02.395 }, 00:15:02.395 "peer_address": { 00:15:02.395 "trtype": "RDMA", 00:15:02.395 "adrfam": "IPv4", 00:15:02.395 "traddr": "192.168.100.8", 00:15:02.395 "trsvcid": "35921" 00:15:02.395 }, 00:15:02.395 "auth": { 00:15:02.395 "state": "completed", 00:15:02.395 "digest": "sha512", 00:15:02.395 "dhgroup": "ffdhe3072" 00:15:02.395 } 00:15:02.395 } 00:15:02.395 ]' 00:15:02.395 17:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:15:02.395 17:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:02.395 17:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:02.395 17:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:02.395 17:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:02.395 17:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.395 17:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.395 17:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.652 17:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z: 00:15:02.652 17:39:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z: 00:15:03.586 17:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.586 17:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:15:03.586 17:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.586 17:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.586 17:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.586 17:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:03.586 17:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:03.586 17:39:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:03.844 17:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:15:03.844 17:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:03.844 17:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:03.844 17:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 
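Before each attach, auth.sh@121 narrows the host daemon to exactly one digest and one DH group, so a completed handshake proves that specific combination was negotiated rather than some fallback. Against the host RPC socket used throughout this run:

    # Host side: permit only sha512 with ffdhe3072 for this iteration.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072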
00:15:03.844 17:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:03.844 17:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.844 17:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3 00:15:03.844 17:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.844 17:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.844 17:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.844 17:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:03.844 17:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:03.844 17:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:04.116 00:15:04.116 17:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.116 17:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:04.116 17:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.447 17:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.447 17:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.447 17:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.447 17:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.447 17:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.447 17:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:04.447 { 00:15:04.447 "cntlid": 119, 00:15:04.447 "qid": 0, 00:15:04.447 "state": "enabled", 00:15:04.447 "thread": "nvmf_tgt_poll_group_000", 00:15:04.447 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:15:04.447 "listen_address": { 00:15:04.447 "trtype": "RDMA", 00:15:04.447 "adrfam": "IPv4", 00:15:04.447 "traddr": "192.168.100.8", 00:15:04.447 "trsvcid": "4420" 00:15:04.447 }, 00:15:04.447 "peer_address": { 00:15:04.447 "trtype": "RDMA", 00:15:04.447 "adrfam": "IPv4", 00:15:04.447 "traddr": "192.168.100.8", 00:15:04.447 "trsvcid": "36618" 00:15:04.447 }, 00:15:04.447 "auth": { 00:15:04.447 "state": "completed", 00:15:04.447 "digest": "sha512", 00:15:04.447 "dhgroup": "ffdhe3072" 
00:15:04.447 } 00:15:04.447 } 00:15:04.447 ]' 00:15:04.447 17:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:04.447 17:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:04.447 17:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:04.447 17:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:04.447 17:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:04.447 17:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.447 17:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.447 17:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.752 17:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=: 00:15:04.752 17:39:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=: 00:15:05.348 17:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.348 17:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:15:05.348 17:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.348 17:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.348 17:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.348 17:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:05.348 17:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:05.348 17:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:05.348 17:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:05.605 17:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:15:05.605 17:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:05.605 17:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha512 00:15:05.605 17:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:05.606 17:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:05.606 17:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.606 17:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:05.606 17:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.606 17:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.606 17:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.606 17:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:05.606 17:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:05.606 17:39:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:05.864 00:15:05.864 17:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:05.864 17:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:05.864 17:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.122 17:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.122 17:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.122 17:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.122 17:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.122 17:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.122 17:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:06.123 { 00:15:06.123 "cntlid": 121, 00:15:06.123 "qid": 0, 00:15:06.123 "state": "enabled", 00:15:06.123 "thread": "nvmf_tgt_poll_group_000", 00:15:06.123 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:15:06.123 "listen_address": { 00:15:06.123 "trtype": "RDMA", 00:15:06.123 "adrfam": "IPv4", 00:15:06.123 "traddr": "192.168.100.8", 00:15:06.123 "trsvcid": "4420" 00:15:06.123 }, 00:15:06.123 "peer_address": { 00:15:06.123 "trtype": "RDMA", 
00:15:06.123 "adrfam": "IPv4", 00:15:06.123 "traddr": "192.168.100.8", 00:15:06.123 "trsvcid": "46354" 00:15:06.123 }, 00:15:06.123 "auth": { 00:15:06.123 "state": "completed", 00:15:06.123 "digest": "sha512", 00:15:06.123 "dhgroup": "ffdhe4096" 00:15:06.123 } 00:15:06.123 } 00:15:06.123 ]' 00:15:06.123 17:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:06.123 17:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:06.123 17:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:06.381 17:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:06.381 17:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:06.381 17:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.381 17:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.381 17:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.638 17:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=: 00:15:06.638 17:39:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=: 00:15:07.205 17:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.463 17:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:15:07.463 17:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.463 17:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.463 17:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.463 17:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:07.463 17:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:07.463 17:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:15:07.463 17:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:15:07.463 17:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:07.463 17:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:07.463 17:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:07.463 17:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:07.463 17:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.463 17:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.463 17:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.463 17:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.463 17:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.463 17:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.463 17:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.722 17:39:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:07.980 00:15:07.980 17:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:07.980 17:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:07.980 17:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.980 17:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.980 17:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.980 17:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.980 17:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.980 17:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.980 17:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:07.980 { 00:15:07.980 "cntlid": 123, 00:15:07.980 "qid": 0, 00:15:07.980 "state": "enabled", 00:15:07.980 "thread": "nvmf_tgt_poll_group_000", 
00:15:07.980 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:15:07.980 "listen_address": { 00:15:07.980 "trtype": "RDMA", 00:15:07.980 "adrfam": "IPv4", 00:15:07.980 "traddr": "192.168.100.8", 00:15:07.980 "trsvcid": "4420" 00:15:07.980 }, 00:15:07.980 "peer_address": { 00:15:07.980 "trtype": "RDMA", 00:15:07.980 "adrfam": "IPv4", 00:15:07.980 "traddr": "192.168.100.8", 00:15:07.980 "trsvcid": "38402" 00:15:07.980 }, 00:15:07.980 "auth": { 00:15:07.980 "state": "completed", 00:15:07.980 "digest": "sha512", 00:15:07.980 "dhgroup": "ffdhe4096" 00:15:07.980 } 00:15:07.980 } 00:15:07.980 ]' 00:15:07.980 17:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:08.239 17:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:08.239 17:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:08.239 17:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:08.239 17:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:08.239 17:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.239 17:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.239 17:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.497 17:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==: 00:15:08.497 17:39:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==: 00:15:09.063 17:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.321 17:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:15:09.321 17:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.321 17:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.321 17:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.321 17:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:09.321 17:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 
00:15:09.321 17:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:09.578 17:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:15:09.578 17:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:09.578 17:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:09.578 17:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:09.578 17:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:09.578 17:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.578 17:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.578 17:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.578 17:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.578 17:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.578 17:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.578 17:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.578 17:39:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.834 00:15:09.834 17:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.834 17:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:09.834 17:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.092 17:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.092 17:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.092 17:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.092 17:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.092 17:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
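One detail worth reading out of these qpair dumps: listen_address.trsvcid is always 4420, while peer_address.trsvcid changes on every reconnect (46354, 38402, then 60228 in the dump below), since the host side binds a fresh ephemeral port for each RDMA connection. A quick jq probe for that pairing, under the same rpc_cmd assumption as above:

rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
    | jq -r '.[] | "\(.listen_address.trsvcid) <- \(.peer_address.trsvcid)"'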
00:15:10.092 17:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.092 { 00:15:10.092 "cntlid": 125, 00:15:10.092 "qid": 0, 00:15:10.092 "state": "enabled", 00:15:10.092 "thread": "nvmf_tgt_poll_group_000", 00:15:10.092 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:15:10.092 "listen_address": { 00:15:10.092 "trtype": "RDMA", 00:15:10.092 "adrfam": "IPv4", 00:15:10.092 "traddr": "192.168.100.8", 00:15:10.092 "trsvcid": "4420" 00:15:10.092 }, 00:15:10.092 "peer_address": { 00:15:10.092 "trtype": "RDMA", 00:15:10.092 "adrfam": "IPv4", 00:15:10.092 "traddr": "192.168.100.8", 00:15:10.092 "trsvcid": "60228" 00:15:10.092 }, 00:15:10.092 "auth": { 00:15:10.092 "state": "completed", 00:15:10.092 "digest": "sha512", 00:15:10.092 "dhgroup": "ffdhe4096" 00:15:10.092 } 00:15:10.092 } 00:15:10.092 ]' 00:15:10.092 17:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.092 17:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:10.092 17:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:10.092 17:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:10.092 17:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:10.092 17:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.092 17:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.092 17:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.351 17:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z: 00:15:10.351 17:39:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z: 00:15:10.917 17:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.175 17:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:15:11.175 17:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.175 17:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.175 17:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.175 17:39:49 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:11.175 17:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:11.175 17:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:11.434 17:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:15:11.434 17:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:11.434 17:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:11.434 17:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:11.434 17:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:11.434 17:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.434 17:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3 00:15:11.434 17:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.434 17:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.434 17:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.434 17:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:11.434 17:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:11.434 17:39:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:11.692 00:15:11.692 17:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.692 17:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.692 17:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.950 17:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.950 17:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.950 17:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.950 17:39:50 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.950 17:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.950 17:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.950 { 00:15:11.950 "cntlid": 127, 00:15:11.950 "qid": 0, 00:15:11.950 "state": "enabled", 00:15:11.950 "thread": "nvmf_tgt_poll_group_000", 00:15:11.950 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:15:11.950 "listen_address": { 00:15:11.950 "trtype": "RDMA", 00:15:11.950 "adrfam": "IPv4", 00:15:11.950 "traddr": "192.168.100.8", 00:15:11.950 "trsvcid": "4420" 00:15:11.950 }, 00:15:11.950 "peer_address": { 00:15:11.950 "trtype": "RDMA", 00:15:11.950 "adrfam": "IPv4", 00:15:11.950 "traddr": "192.168.100.8", 00:15:11.950 "trsvcid": "58791" 00:15:11.950 }, 00:15:11.950 "auth": { 00:15:11.950 "state": "completed", 00:15:11.950 "digest": "sha512", 00:15:11.950 "dhgroup": "ffdhe4096" 00:15:11.950 } 00:15:11.950 } 00:15:11.950 ]' 00:15:11.950 17:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.950 17:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:11.950 17:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.950 17:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:11.950 17:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.208 17:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.208 17:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.208 17:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.208 17:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=: 00:15:12.208 17:39:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=: 00:15:13.142 17:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.142 17:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:15:13.142 17:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.142 17:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.142 17:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.142 17:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:13.142 17:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.142 17:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:13.142 17:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:13.400 17:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:15:13.400 17:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.400 17:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:13.400 17:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:13.400 17:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:13.400 17:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.400 17:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.400 17:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.400 17:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.400 17:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.400 17:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.400 17:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.400 17:39:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.658 00:15:13.915 17:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.915 17:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.915 17:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.915 17:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.915 17:39:52 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.915 17:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.915 17:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.915 17:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.915 17:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:13.915 { 00:15:13.915 "cntlid": 129, 00:15:13.915 "qid": 0, 00:15:13.915 "state": "enabled", 00:15:13.916 "thread": "nvmf_tgt_poll_group_000", 00:15:13.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:15:13.916 "listen_address": { 00:15:13.916 "trtype": "RDMA", 00:15:13.916 "adrfam": "IPv4", 00:15:13.916 "traddr": "192.168.100.8", 00:15:13.916 "trsvcid": "4420" 00:15:13.916 }, 00:15:13.916 "peer_address": { 00:15:13.916 "trtype": "RDMA", 00:15:13.916 "adrfam": "IPv4", 00:15:13.916 "traddr": "192.168.100.8", 00:15:13.916 "trsvcid": "58640" 00:15:13.916 }, 00:15:13.916 "auth": { 00:15:13.916 "state": "completed", 00:15:13.916 "digest": "sha512", 00:15:13.916 "dhgroup": "ffdhe6144" 00:15:13.916 } 00:15:13.916 } 00:15:13.916 ]' 00:15:13.916 17:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.173 17:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:14.173 17:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.173 17:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:14.173 17:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.173 17:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.173 17:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.173 17:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.438 17:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=: 00:15:14.438 17:39:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=: 00:15:15.004 17:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.262 17:39:53 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:15:15.262 17:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.262 17:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.262 17:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.262 17:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.262 17:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:15.262 17:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:15.519 17:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:15:15.519 17:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.519 17:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:15.519 17:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:15.519 17:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:15.519 17:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.519 17:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.519 17:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.519 17:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.520 17:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.520 17:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.520 17:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.520 17:39:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.777 00:15:15.777 17:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.777 17:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq 
-r '.[].name' 00:15:15.777 17:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.035 17:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.035 17:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.035 17:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.035 17:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.035 17:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.035 17:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.035 { 00:15:16.035 "cntlid": 131, 00:15:16.035 "qid": 0, 00:15:16.035 "state": "enabled", 00:15:16.035 "thread": "nvmf_tgt_poll_group_000", 00:15:16.035 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:15:16.035 "listen_address": { 00:15:16.035 "trtype": "RDMA", 00:15:16.035 "adrfam": "IPv4", 00:15:16.035 "traddr": "192.168.100.8", 00:15:16.035 "trsvcid": "4420" 00:15:16.035 }, 00:15:16.035 "peer_address": { 00:15:16.035 "trtype": "RDMA", 00:15:16.035 "adrfam": "IPv4", 00:15:16.035 "traddr": "192.168.100.8", 00:15:16.035 "trsvcid": "34942" 00:15:16.035 }, 00:15:16.035 "auth": { 00:15:16.035 "state": "completed", 00:15:16.035 "digest": "sha512", 00:15:16.035 "dhgroup": "ffdhe6144" 00:15:16.035 } 00:15:16.035 } 00:15:16.035 ]' 00:15:16.035 17:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:16.035 17:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:16.035 17:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.035 17:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:16.035 17:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.035 17:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.035 17:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.035 17:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.293 17:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==: 00:15:16.293 17:39:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret 
DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==: 00:15:16.858 17:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.116 17:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:15:17.116 17:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.116 17:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.116 17:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.116 17:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.116 17:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:17.116 17:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:17.373 17:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:15:17.373 17:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.373 17:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:17.374 17:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:17.374 17:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:17.374 17:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.374 17:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.374 17:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.374 17:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.374 17:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.374 17:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.374 17:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.374 17:39:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.938 00:15:17.938 17:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.938 17:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.938 17:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.938 17:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.938 17:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.938 17:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.938 17:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.938 17:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.938 17:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.938 { 00:15:17.938 "cntlid": 133, 00:15:17.938 "qid": 0, 00:15:17.938 "state": "enabled", 00:15:17.938 "thread": "nvmf_tgt_poll_group_000", 00:15:17.938 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:15:17.938 "listen_address": { 00:15:17.938 "trtype": "RDMA", 00:15:17.938 "adrfam": "IPv4", 00:15:17.938 "traddr": "192.168.100.8", 00:15:17.938 "trsvcid": "4420" 00:15:17.938 }, 00:15:17.938 "peer_address": { 00:15:17.938 "trtype": "RDMA", 00:15:17.938 "adrfam": "IPv4", 00:15:17.938 "traddr": "192.168.100.8", 00:15:17.938 "trsvcid": "50876" 00:15:17.938 }, 00:15:17.938 "auth": { 00:15:17.938 "state": "completed", 00:15:17.938 "digest": "sha512", 00:15:17.938 "dhgroup": "ffdhe6144" 00:15:17.938 } 00:15:17.938 } 00:15:17.938 ]' 00:15:17.938 17:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.938 17:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:17.938 17:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.195 17:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:18.195 17:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.195 17:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.195 17:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.195 17:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.453 17:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z: 00:15:18.453 17:39:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z: 00:15:19.016 17:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.273 17:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:15:19.273 17:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.273 17:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.273 17:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.273 17:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:19.273 17:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:19.273 17:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:19.273 17:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:15:19.273 17:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.273 17:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:19.273 17:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:19.273 17:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:19.273 17:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.273 17:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3 00:15:19.273 17:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.273 17:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.530 17:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.531 17:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:19.531 17:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:19.531 17:39:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:19.788 00:15:19.788 17:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:19.788 17:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:19.788 17:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.045 17:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.045 17:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.045 17:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.045 17:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.045 17:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.045 17:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.045 { 00:15:20.045 "cntlid": 135, 00:15:20.045 "qid": 0, 00:15:20.045 "state": "enabled", 00:15:20.045 "thread": "nvmf_tgt_poll_group_000", 00:15:20.045 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:15:20.045 "listen_address": { 00:15:20.045 "trtype": "RDMA", 00:15:20.045 "adrfam": "IPv4", 00:15:20.045 "traddr": "192.168.100.8", 00:15:20.045 "trsvcid": "4420" 00:15:20.045 }, 00:15:20.045 "peer_address": { 00:15:20.045 "trtype": "RDMA", 00:15:20.045 "adrfam": "IPv4", 00:15:20.045 "traddr": "192.168.100.8", 00:15:20.045 "trsvcid": "57285" 00:15:20.045 }, 00:15:20.045 "auth": { 00:15:20.045 "state": "completed", 00:15:20.045 "digest": "sha512", 00:15:20.045 "dhgroup": "ffdhe6144" 00:15:20.045 } 00:15:20.045 } 00:15:20.045 ]' 00:15:20.045 17:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.045 17:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:20.045 17:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.045 17:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:20.045 17:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.045 17:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.045 17:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.045 17:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.303 17:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=: 
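The DHHC-1 strings being passed around are the NVMe-oF in-band authentication secret format, DHHC-1:<t>:<base64 key material>:, where <t> records how the key was transformed (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512). Note also that this key3 pass carries no --dhchap-ctrl-secret: ckeys[3] is empty, so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion at auth.sh@68 drops the controller-key flags and the exchange is unidirectional. A hedged sketch of minting such a secret with a recent nvme-cli (the gen-dhchap-key flags here are assumptions to verify against your nvme-cli version):

# Assumption: nvme-cli 2.x ships gen-dhchap-key; confirm with `nvme gen-dhchap-key --help`.
nvme gen-dhchap-key --hmac=3 --key-length=48 \
    --nqn nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
# Prints a string of the shape DHHC-1:03:<base64>: suitable for --dhchap-secret.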
00:15:20.303 17:39:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=: 00:15:20.867 17:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.124 17:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:15:21.124 17:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.124 17:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.124 17:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.124 17:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:21.124 17:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.124 17:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:21.124 17:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:21.382 17:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:15:21.382 17:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.382 17:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:21.382 17:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:21.382 17:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:21.382 17:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.382 17:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.382 17:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.382 17:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.382 17:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.382 17:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.382 17:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.382 17:39:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:21.947 00:15:21.947 17:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:21.947 17:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.947 17:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:21.947 17:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.203 17:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.203 17:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.203 17:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.203 17:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.203 17:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.203 { 00:15:22.203 "cntlid": 137, 00:15:22.203 "qid": 0, 00:15:22.203 "state": "enabled", 00:15:22.203 "thread": "nvmf_tgt_poll_group_000", 00:15:22.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:15:22.203 "listen_address": { 00:15:22.203 "trtype": "RDMA", 00:15:22.204 "adrfam": "IPv4", 00:15:22.204 "traddr": "192.168.100.8", 00:15:22.204 "trsvcid": "4420" 00:15:22.204 }, 00:15:22.204 "peer_address": { 00:15:22.204 "trtype": "RDMA", 00:15:22.204 "adrfam": "IPv4", 00:15:22.204 "traddr": "192.168.100.8", 00:15:22.204 "trsvcid": "54232" 00:15:22.204 }, 00:15:22.204 "auth": { 00:15:22.204 "state": "completed", 00:15:22.204 "digest": "sha512", 00:15:22.204 "dhgroup": "ffdhe8192" 00:15:22.204 } 00:15:22.204 } 00:15:22.204 ]' 00:15:22.204 17:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.204 17:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:22.204 17:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.204 17:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:22.204 17:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.204 17:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.204 17:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.204 17:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.461 17:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=: 00:15:22.461 17:40:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=: 00:15:23.024 17:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.280 17:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:15:23.280 17:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.280 17:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.280 17:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.280 17:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.280 17:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:23.280 17:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:23.538 17:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:15:23.538 17:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.538 17:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:23.538 17:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:23.538 17:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:23.538 17:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.538 17:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.538 17:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.538 17:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.538 17:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:15:23.538 17:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.538 17:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.538 17:40:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.103 00:15:24.103 17:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.103 17:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.103 17:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.103 17:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.103 17:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.103 17:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.103 17:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.103 17:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.103 17:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.103 { 00:15:24.103 "cntlid": 139, 00:15:24.103 "qid": 0, 00:15:24.103 "state": "enabled", 00:15:24.103 "thread": "nvmf_tgt_poll_group_000", 00:15:24.103 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:15:24.103 "listen_address": { 00:15:24.103 "trtype": "RDMA", 00:15:24.103 "adrfam": "IPv4", 00:15:24.103 "traddr": "192.168.100.8", 00:15:24.103 "trsvcid": "4420" 00:15:24.103 }, 00:15:24.103 "peer_address": { 00:15:24.103 "trtype": "RDMA", 00:15:24.103 "adrfam": "IPv4", 00:15:24.103 "traddr": "192.168.100.8", 00:15:24.103 "trsvcid": "41504" 00:15:24.103 }, 00:15:24.103 "auth": { 00:15:24.103 "state": "completed", 00:15:24.103 "digest": "sha512", 00:15:24.103 "dhgroup": "ffdhe8192" 00:15:24.103 } 00:15:24.103 } 00:15:24.103 ]' 00:15:24.103 17:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.360 17:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:24.360 17:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.360 17:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:24.360 17:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.360 17:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.360 17:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.360 17:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.617 17:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==: 00:15:24.617 17:40:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: --dhchap-ctrl-secret DHHC-1:02:MGFkY2RmYmRlYmUxMGM4YjAwMDVhNDBiMGU1NjFmMzRiZjQyNmQyYzMzY2U3ZmU075QebA==: 00:15:25.182 17:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.439 17:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:15:25.439 17:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.439 17:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.439 17:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.439 17:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.439 17:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:25.439 17:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:25.697 17:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:15:25.697 17:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.697 17:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:25.697 17:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:25.697 17:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:25.697 17:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.697 17:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.697 17:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.697 17:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.697 17:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.697 17:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.697 17:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.697 17:40:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.261 00:15:26.261 17:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.261 17:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.261 17:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.261 17:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.261 17:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.261 17:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.261 17:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.261 17:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.261 17:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.261 { 00:15:26.261 "cntlid": 141, 00:15:26.261 "qid": 0, 00:15:26.261 "state": "enabled", 00:15:26.261 "thread": "nvmf_tgt_poll_group_000", 00:15:26.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:15:26.261 "listen_address": { 00:15:26.261 "trtype": "RDMA", 00:15:26.261 "adrfam": "IPv4", 00:15:26.261 "traddr": "192.168.100.8", 00:15:26.261 "trsvcid": "4420" 00:15:26.261 }, 00:15:26.261 "peer_address": { 00:15:26.261 "trtype": "RDMA", 00:15:26.261 "adrfam": "IPv4", 00:15:26.261 "traddr": "192.168.100.8", 00:15:26.261 "trsvcid": "54652" 00:15:26.261 }, 00:15:26.261 "auth": { 00:15:26.261 "state": "completed", 00:15:26.261 "digest": "sha512", 00:15:26.261 "dhgroup": "ffdhe8192" 00:15:26.261 } 00:15:26.261 } 00:15:26.261 ]' 00:15:26.261 17:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.261 17:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:26.262 17:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.519 17:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:26.519 17:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.519 17:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.519 17:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.519 17:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.519 17:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z: 00:15:26.519 17:40:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:01:OTA4NmUyZGUxOWVhMGM2MzY5MTg3MzEzZDc4N2M1M2Y//K8Z: 00:15:27.448 17:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.448 17:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:15:27.448 17:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.448 17:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.448 17:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.448 17:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:27.448 17:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:27.449 17:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:27.705 17:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:15:27.705 17:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:27.705 17:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:27.705 17:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:27.705 17:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:27.705 17:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.705 17:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3 00:15:27.705 17:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.705 17:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.705 17:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.705 17:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:27.705 17:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:27.705 17:40:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:28.269 00:15:28.269 17:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:28.269 17:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:28.269 17:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.269 17:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.269 17:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.269 17:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.269 17:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.525 17:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.525 17:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:28.525 { 00:15:28.525 "cntlid": 143, 00:15:28.525 "qid": 0, 00:15:28.525 "state": "enabled", 00:15:28.525 "thread": "nvmf_tgt_poll_group_000", 00:15:28.525 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:15:28.525 "listen_address": { 00:15:28.525 "trtype": "RDMA", 00:15:28.525 "adrfam": "IPv4", 00:15:28.525 "traddr": "192.168.100.8", 00:15:28.525 "trsvcid": "4420" 00:15:28.525 }, 00:15:28.525 "peer_address": { 00:15:28.525 "trtype": "RDMA", 00:15:28.525 "adrfam": "IPv4", 00:15:28.525 "traddr": "192.168.100.8", 00:15:28.525 "trsvcid": "46445" 00:15:28.525 }, 00:15:28.525 "auth": { 00:15:28.525 "state": "completed", 00:15:28.525 "digest": "sha512", 00:15:28.525 "dhgroup": "ffdhe8192" 00:15:28.525 } 00:15:28.525 } 00:15:28.525 ]' 00:15:28.525 17:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:28.525 17:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:28.525 17:40:06 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:28.525 17:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:28.525 17:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:28.525 17:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.525 17:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.525 17:40:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:28.781 17:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=: 00:15:28.781 17:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=: 00:15:29.346 17:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.602 17:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:15:29.602 17:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.602 17:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.602 17:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.602 17:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:29.602 17:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:15:29.602 17:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:29.602 17:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:29.602 17:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:29.602 17:40:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:29.859 17:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:15:29.859 17:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:29.859 17:40:08 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:29.859 17:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:29.859 17:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:29.859 17:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.859 17:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.859 17:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.859 17:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.859 17:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.859 17:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.859 17:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.859 17:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.422 00:15:30.422 17:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.422 17:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.422 17:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.422 17:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.422 17:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.422 17:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.422 17:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.422 17:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.422 17:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.422 { 00:15:30.423 "cntlid": 145, 00:15:30.423 "qid": 0, 00:15:30.423 "state": "enabled", 00:15:30.423 "thread": "nvmf_tgt_poll_group_000", 00:15:30.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:15:30.423 "listen_address": { 00:15:30.423 "trtype": "RDMA", 00:15:30.423 "adrfam": "IPv4", 00:15:30.423 "traddr": "192.168.100.8", 00:15:30.423 "trsvcid": "4420" 00:15:30.423 }, 00:15:30.423 
"peer_address": { 00:15:30.423 "trtype": "RDMA", 00:15:30.423 "adrfam": "IPv4", 00:15:30.423 "traddr": "192.168.100.8", 00:15:30.423 "trsvcid": "45039" 00:15:30.423 }, 00:15:30.423 "auth": { 00:15:30.423 "state": "completed", 00:15:30.423 "digest": "sha512", 00:15:30.423 "dhgroup": "ffdhe8192" 00:15:30.423 } 00:15:30.423 } 00:15:30.423 ]' 00:15:30.423 17:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.679 17:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:30.679 17:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.679 17:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:30.679 17:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.679 17:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.679 17:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.679 17:40:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.936 17:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=: 00:15:30.936 17:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:00:ODZjNGNiOGM3MjhjZGE5NTA2Y2RlNDEzNDlhNDkwZjhiMjA3YWJiNmQ2M2FkMjdmPuungQ==: --dhchap-ctrl-secret DHHC-1:03:Mjk0N2Q5NjE0YmJmODUwNGFkMmQ0NDNmNGJmOTRjOTM1OGQzNjQwOTY3YzRjN2U1OGU2ZmVlZjAxZjhjYmJlYqC7adY=: 00:15:31.500 17:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.757 17:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:15:31.757 17:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.757 17:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.757 17:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.757 17:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 00:15:31.757 17:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.757 17:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.757 17:40:09 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.757 17:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:15:31.757 17:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:31.757 17:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:15:31.757 17:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:31.757 17:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:31.757 17:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:31.757 17:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:31.757 17:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:15:31.757 17:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:31.757 17:40:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:32.319 request: 00:15:32.319 { 00:15:32.319 "name": "nvme0", 00:15:32.319 "trtype": "rdma", 00:15:32.319 "traddr": "192.168.100.8", 00:15:32.319 "adrfam": "ipv4", 00:15:32.319 "trsvcid": "4420", 00:15:32.319 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:32.319 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:15:32.319 "prchk_reftag": false, 00:15:32.319 "prchk_guard": false, 00:15:32.319 "hdgst": false, 00:15:32.319 "ddgst": false, 00:15:32.319 "dhchap_key": "key2", 00:15:32.319 "allow_unrecognized_csi": false, 00:15:32.319 "method": "bdev_nvme_attach_controller", 00:15:32.319 "req_id": 1 00:15:32.319 } 00:15:32.319 Got JSON-RPC error response 00:15:32.319 response: 00:15:32.319 { 00:15:32.319 "code": -5, 00:15:32.319 "message": "Input/output error" 00:15:32.319 } 00:15:32.319 17:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:32.319 17:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:32.319 17:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:32.319 17:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:32.319 17:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:15:32.319 17:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.319 17:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:32.319 17:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.319 17:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.319 17:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.319 17:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.319 17:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.319 17:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:32.319 17:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:32.319 17:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:32.319 17:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:32.319 17:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.319 17:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:32.319 17:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.319 17:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:32.319 17:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:32.319 17:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:32.882 request: 00:15:32.882 { 00:15:32.882 "name": "nvme0", 00:15:32.882 "trtype": "rdma", 00:15:32.882 "traddr": "192.168.100.8", 00:15:32.882 "adrfam": "ipv4", 00:15:32.882 "trsvcid": "4420", 00:15:32.882 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:32.882 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:15:32.882 "prchk_reftag": false, 00:15:32.882 "prchk_guard": false, 00:15:32.882 "hdgst": false, 00:15:32.882 "ddgst": false, 00:15:32.882 "dhchap_key": "key1", 00:15:32.882 "dhchap_ctrlr_key": "ckey2", 00:15:32.882 "allow_unrecognized_csi": false, 00:15:32.882 "method": "bdev_nvme_attach_controller", 00:15:32.882 "req_id": 1 00:15:32.882 } 00:15:32.882 Got JSON-RPC error response 00:15:32.882 response: 00:15:32.882 { 00:15:32.882 "code": -5, 00:15:32.882 "message": "Input/output error" 00:15:32.882 } 00:15:32.882 17:40:10 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:32.882 17:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:32.882 17:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:32.882 17:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:32.882 17:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:15:32.882 17:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.882 17:40:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.882 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.882 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 00:15:32.882 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.882 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.882 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.882 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.882 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:32.882 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.882 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:32.882 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.882 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:32.882 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.882 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.882 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.882 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.139 request: 00:15:33.139 { 00:15:33.139 "name": "nvme0", 
00:15:33.139 "trtype": "rdma", 00:15:33.139 "traddr": "192.168.100.8", 00:15:33.139 "adrfam": "ipv4", 00:15:33.139 "trsvcid": "4420", 00:15:33.139 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:33.139 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:15:33.139 "prchk_reftag": false, 00:15:33.139 "prchk_guard": false, 00:15:33.139 "hdgst": false, 00:15:33.139 "ddgst": false, 00:15:33.139 "dhchap_key": "key1", 00:15:33.139 "dhchap_ctrlr_key": "ckey1", 00:15:33.139 "allow_unrecognized_csi": false, 00:15:33.139 "method": "bdev_nvme_attach_controller", 00:15:33.139 "req_id": 1 00:15:33.139 } 00:15:33.139 Got JSON-RPC error response 00:15:33.139 response: 00:15:33.139 { 00:15:33.139 "code": -5, 00:15:33.139 "message": "Input/output error" 00:15:33.139 } 00:15:33.139 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:33.139 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:33.139 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:33.139 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:33.139 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:15:33.139 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.139 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.139 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.139 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 614617 00:15:33.139 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 614617 ']' 00:15:33.139 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 614617 00:15:33.139 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:15:33.139 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:33.139 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 614617 00:15:33.396 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:33.396 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:33.396 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 614617' 00:15:33.396 killing process with pid 614617 00:15:33.396 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 614617 00:15:33.396 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 614617 00:15:33.396 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:15:33.396 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:33.396 17:40:11 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:33.396 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.396 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=635705 00:15:33.396 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:15:33.396 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 635705 00:15:33.396 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 635705 ']' 00:15:33.396 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.396 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:33.396 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.396 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:33.396 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.653 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:33.653 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:33.653 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:33.653 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:33.653 17:40:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.653 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.653 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:33.653 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 635705 00:15:33.653 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 635705 ']' 00:15:33.653 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.653 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:33.653 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
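The trace above tears down the previous target (killprocess 614617) and restarts nvmf_tgt with --wait-for-rpc -L nvmf_auth, so the new process (pid 635705) holds before framework initialization and emits nvmf_auth debug logging once it is up. A minimal sketch of the readiness check that waitforlisten performs, assuming the default /var/tmp/spdk.sock control socket; the loop below is illustrative rather than the harness's exact implementation:

# Poll the target's JSON-RPC socket until it answers, then let
# initialization proceed (needed only because of --wait-for-rpc).
rpc_addr=/var/tmp/spdk.sock
for _ in $(seq 1 100); do
    spdk/scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null && break
    sleep 0.1
done
spdk/scripts/rpc.py -s "$rpc_addr" framework_start_init   # resume startup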
00:15:33.653 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:33.653 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.910 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:33.910 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:33.910 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:15:33.910 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.910 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.910 null0 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Nr9 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Unh ]] 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Unh 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.4v9 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.Ba4 ]] 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Ba4 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 
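Each iteration of the loop above loads one DH-CHAP secret file into the target's keyring under a stable name (key0/ckey0 and key1/ckey1 so far; key2, ckey2, and key3 follow below), so later RPCs can refer to keys by name instead of passing secrets inline. A sketch of the pattern for a single key pair, reusing the same /tmp/spdk.key-* files and RPC names that appear in the log rather than prescribing any API beyond what is shown:

# Register a host key and its controller (bidirectional) counterpart,
# then authorize the host on the subsystem by key name.
rpc=spdk/scripts/rpc.py
$rpc keyring_file_add_key key0  /tmp/spdk.key-null.Nr9
$rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Unh
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0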
00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.HM6 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.MXR ]] 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MXR 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.PD3 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:34.168 17:40:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:35.099 nvme0n1 00:15:35.099 17:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.099 17:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.099 17:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.099 17:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.099 17:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.099 17:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.099 17:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.099 17:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.099 17:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.099 { 00:15:35.099 "cntlid": 1, 00:15:35.099 "qid": 0, 00:15:35.099 "state": "enabled", 00:15:35.099 "thread": "nvmf_tgt_poll_group_000", 00:15:35.099 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:15:35.099 "listen_address": { 00:15:35.099 "trtype": "RDMA", 00:15:35.099 "adrfam": "IPv4", 00:15:35.099 "traddr": "192.168.100.8", 00:15:35.099 "trsvcid": "4420" 00:15:35.099 }, 00:15:35.099 "peer_address": { 00:15:35.099 "trtype": "RDMA", 00:15:35.099 "adrfam": "IPv4", 00:15:35.099 "traddr": "192.168.100.8", 00:15:35.099 "trsvcid": "36840" 00:15:35.099 }, 00:15:35.099 "auth": { 00:15:35.099 "state": "completed", 00:15:35.099 "digest": "sha512", 00:15:35.099 "dhgroup": "ffdhe8192" 00:15:35.099 } 00:15:35.099 } 00:15:35.099 ]' 00:15:35.099 17:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.099 17:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:35.099 17:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.099 17:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:35.099 17:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.356 17:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.356 17:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.356 17:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.356 17:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=: 00:15:35.356 17:40:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=: 00:15:36.285 17:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.285 17:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:15:36.285 17:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.285 17:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.285 17:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.285 17:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key3 00:15:36.285 17:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.285 17:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.285 17:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.285 17:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:15:36.285 17:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:15:36.542 17:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:36.542 17:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:36.542 17:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:36.542 17:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:36.542 17:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:36.542 17:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:36.542 17:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:36.542 17:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:36.542 17:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:36.542 17:40:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:36.799 request: 00:15:36.799 { 00:15:36.799 "name": "nvme0", 00:15:36.799 "trtype": "rdma", 00:15:36.799 "traddr": "192.168.100.8", 00:15:36.799 "adrfam": "ipv4", 00:15:36.799 "trsvcid": "4420", 00:15:36.799 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:36.799 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:15:36.799 "prchk_reftag": false, 00:15:36.799 "prchk_guard": false, 00:15:36.799 "hdgst": false, 00:15:36.799 "ddgst": false, 00:15:36.799 "dhchap_key": "key3", 00:15:36.799 "allow_unrecognized_csi": false, 00:15:36.799 "method": "bdev_nvme_attach_controller", 00:15:36.799 "req_id": 1 00:15:36.799 } 00:15:36.799 Got JSON-RPC error response 00:15:36.799 response: 00:15:36.799 { 00:15:36.799 "code": -5, 00:15:36.799 "message": "Input/output error" 00:15:36.799 } 00:15:36.799 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:36.799 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:36.799 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:36.799 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:36.799 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:15:36.799 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:15:36.799 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:36.799 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:15:37.056 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:15:37.056 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:37.056 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:15:37.056 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:37.056 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.056 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:37.056 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.056 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 
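The -5 ("Input/output error") above is the expected outcome of a deliberate capability mismatch: bdev_nvme_set_options confined the host to sha256 while the controller had been authenticated with sha512/ffdhe8192, so DH-HMAC-CHAP negotiation cannot complete; the lines that follow repeat the same pattern with --dhchap-dhgroups ffdhe2048. A minimal standalone sketch of this negative check, reusing only RPCs that appear verbatim in this log (socket path, addresses, and NQNs are copied from the run; the shell variables are shorthand, not part of auth.sh):

#!/usr/bin/env bash
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/host.sock
hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c

# Restrict the host to sha256; the target side was set up with sha512/ffdhe8192.
"$rpc" -s "$sock" bdev_nvme_set_options --dhchap-digests sha256

# The attach must fail with JSON-RPC code -5 ("Input/output error"),
# since host and target share no DH-HMAC-CHAP digest.
if "$rpc" -s "$sock" bdev_nvme_attach_controller -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q "$hostnqn" \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3; then
    echo "FAIL: attach succeeded despite digest mismatch" >&2
    exit 1
fi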
00:15:37.056 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:37.057 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:37.314 request: 00:15:37.314 { 00:15:37.314 "name": "nvme0", 00:15:37.314 "trtype": "rdma", 00:15:37.314 "traddr": "192.168.100.8", 00:15:37.314 "adrfam": "ipv4", 00:15:37.314 "trsvcid": "4420", 00:15:37.314 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:37.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:15:37.314 "prchk_reftag": false, 00:15:37.314 "prchk_guard": false, 00:15:37.314 "hdgst": false, 00:15:37.314 "ddgst": false, 00:15:37.314 "dhchap_key": "key3", 00:15:37.314 "allow_unrecognized_csi": false, 00:15:37.314 "method": "bdev_nvme_attach_controller", 00:15:37.314 "req_id": 1 00:15:37.314 } 00:15:37.314 Got JSON-RPC error response 00:15:37.314 response: 00:15:37.314 { 00:15:37.314 "code": -5, 00:15:37.314 "message": "Input/output error" 00:15:37.314 } 00:15:37.314 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:37.314 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:37.314 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:37.314 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:37.314 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:37.314 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:15:37.314 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:15:37.314 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:37.314 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:37.314 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:37.571 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:15:37.571 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.571 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.571 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:15:37.571 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:15:37.571 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.571 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.571 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.571 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:37.571 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:37.571 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:37.571 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:37.571 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.571 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:37.571 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:37.571 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:37.572 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:37.572 17:40:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:37.828 request: 00:15:37.828 { 00:15:37.828 "name": "nvme0", 00:15:37.828 "trtype": "rdma", 00:15:37.828 "traddr": "192.168.100.8", 00:15:37.828 "adrfam": "ipv4", 00:15:37.828 "trsvcid": "4420", 00:15:37.828 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:37.828 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:15:37.828 "prchk_reftag": false, 00:15:37.828 "prchk_guard": false, 00:15:37.828 "hdgst": false, 00:15:37.829 "ddgst": false, 00:15:37.829 "dhchap_key": "key0", 00:15:37.829 "dhchap_ctrlr_key": "key1", 00:15:37.829 "allow_unrecognized_csi": false, 00:15:37.829 "method": "bdev_nvme_attach_controller", 00:15:37.829 "req_id": 1 00:15:37.829 } 00:15:37.829 Got JSON-RPC error response 00:15:37.829 response: 00:15:37.829 { 00:15:37.829 "code": -5, 00:15:37.829 "message": "Input/output error" 00:15:37.829 } 00:15:37.829 17:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:37.829 17:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:37.829 
17:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:37.829 17:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:37.829 17:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:15:37.829 17:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:37.829 17:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:15:38.085 nvme0n1 00:15:38.085 17:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:15:38.085 17:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:15:38.085 17:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.342 17:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.342 17:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.342 17:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.599 17:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 00:15:38.599 17:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.599 17:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.599 17:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.599 17:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:38.600 17:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:38.600 17:40:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:39.162 nvme0n1 00:15:39.162 17:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:15:39.162 17:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:15:39.162 17:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.419 17:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.419 17:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:39.419 17:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.419 17:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.419 17:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.419 17:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:15:39.419 17:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.419 17:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:15:39.676 17:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.676 17:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=: 00:15:39.676 17:40:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid 800e967b-538f-e911-906e-001635649f5c -l 0 --dhchap-secret DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: --dhchap-ctrl-secret DHHC-1:03:NTU0ODI3NDEyZDA4MWRlMTE4YmQzZDgwYTIyNjU2MzMyMzEwNjljMDlmNTY5ZTU0NjY0M2NiNmEwNDNmZjYwZEyFQ/A=: 00:15:40.241 17:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:15:40.241 17:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:15:40.241 17:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:15:40.241 17:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:15:40.241 17:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:15:40.241 17:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:15:40.241 17:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:15:40.241 17:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.241 17:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.498 17:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:15:40.498 17:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:40.498 17:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:15:40.498 17:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:15:40.498 17:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:40.498 17:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:15:40.498 17:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:40.498 17:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:15:40.498 17:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:40.498 17:40:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:15:41.061 request: 00:15:41.061 { 00:15:41.061 "name": "nvme0", 00:15:41.061 "trtype": "rdma", 00:15:41.061 "traddr": "192.168.100.8", 00:15:41.061 "adrfam": "ipv4", 00:15:41.061 "trsvcid": "4420", 00:15:41.061 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:41.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c", 00:15:41.061 "prchk_reftag": false, 00:15:41.061 "prchk_guard": false, 00:15:41.061 "hdgst": false, 00:15:41.061 "ddgst": false, 00:15:41.061 "dhchap_key": "key1", 00:15:41.061 "allow_unrecognized_csi": false, 00:15:41.061 "method": "bdev_nvme_attach_controller", 00:15:41.061 "req_id": 1 00:15:41.061 } 00:15:41.061 Got JSON-RPC error response 00:15:41.061 response: 00:15:41.061 { 00:15:41.061 "code": -5, 00:15:41.061 "message": "Input/output error" 00:15:41.061 } 00:15:41.061 17:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:41.061 17:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:41.061 17:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:41.061 17:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:41.061 17:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:41.061 17:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 
--dhchap-ctrlr-key key3 00:15:41.061 17:40:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:41.626 nvme0n1 00:15:41.946 17:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:15:41.946 17:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:15:41.946 17:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.946 17:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.946 17:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.946 17:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.228 17:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:15:42.228 17:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.228 17:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.228 17:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.228 17:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:15:42.228 17:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:42.228 17:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:15:42.494 nvme0n1 00:15:42.494 17:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:15:42.494 17:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:15:42.494 17:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.758 17:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.758 17:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.758 17:40:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.015 17:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:43.015 17:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.015 17:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.015 17:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.015 17:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: '' 2s 00:15:43.015 17:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:43.015 17:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:43.015 17:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: 00:15:43.015 17:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:15:43.015 17:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:43.015 17:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:43.015 17:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: ]] 00:15:43.015 17:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YmJiNzFiZTE3MDIyYmVmZTc3ODhmMjkxNDhkMDM0NDHEnwQh: 00:15:43.015 17:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:15:43.015 17:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:43.015 17:40:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:44.911 17:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:15:44.911 17:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:15:44.911 17:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:15:44.911 17:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:15:44.911 17:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:15:44.911 17:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:15:44.911 17:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:15:44.912 17:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key1 --dhchap-ctrlr-key key2 00:15:44.912 17:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.912 17:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.912 
17:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.912 17:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: 2s 00:15:44.912 17:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:15:44.912 17:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:15:44.912 17:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:15:44.912 17:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: 00:15:44.912 17:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:15:44.912 17:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:15:44.912 17:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:15:44.912 17:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: ]] 00:15:44.912 17:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NGM3NzJkM2YzM2U3NTc2NjUzMTEzYzgzY2JiYmFlYTkxMDZmMjEzOTk4Mjc3ZGQ1tYoKsQ==: 00:15:44.912 17:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:15:44.912 17:40:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:15:47.436 17:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:15:47.436 17:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:15:47.436 17:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:15:47.436 17:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:15:47.436 17:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:15:47.436 17:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:15:47.436 17:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:15:47.436 17:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.436 17:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:47.436 17:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.436 17:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.436 17:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.436 17:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:47.436 17:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:47.436 17:40:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:48.000 nvme0n1 00:15:48.000 17:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:48.000 17:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.000 17:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.000 17:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.000 17:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:48.000 17:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:48.564 17:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:15:48.564 17:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:15:48.564 17:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.821 17:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.821 17:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:15:48.821 17:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.821 17:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.821 17:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.821 17:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:15:48.821 17:40:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:15:48.821 17:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:15:48.821 17:40:27 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:15:48.821 17:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.079 17:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.079 17:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:49.079 17:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.079 17:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.079 17:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.079 17:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:49.079 17:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:49.079 17:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:49.079 17:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:15:49.079 17:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:49.079 17:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:15:49.079 17:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:49.079 17:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:49.079 17:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:15:49.641 request: 00:15:49.641 { 00:15:49.641 "name": "nvme0", 00:15:49.641 "dhchap_key": "key1", 00:15:49.641 "dhchap_ctrlr_key": "key3", 00:15:49.641 "method": "bdev_nvme_set_keys", 00:15:49.641 "req_id": 1 00:15:49.641 } 00:15:49.641 Got JSON-RPC error response 00:15:49.641 response: 00:15:49.641 { 00:15:49.641 "code": -13, 00:15:49.641 "message": "Permission denied" 00:15:49.641 } 00:15:49.641 17:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:49.641 17:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:49.641 17:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:49.641 17:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:49.641 17:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:49.641 17:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 
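The -13 ("Permission denied") is the target refusing reauthentication: the subsystem was just re-keyed to key2/key3 with nvmf_subsystem_set_keys, so the host's attempt to switch to key1 is rejected. Because this controller was attached earlier with --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1, the failed re-key soon tears it down, and the harness only has to poll the host's controller list until it is empty, which is what the bdev_nvme_get_controllers / jq length / sleep 1s lines around here do. A condensed sketch of that wait, under the same assumptions as the previous sketch (the 30-iteration bound is illustrative, not taken from auth.sh):

#!/usr/bin/env bash
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/host.sock

# Poll until the controller that can no longer reauthenticate is destroyed.
n=1
for _ in $(seq 1 30); do
    n=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq length)
    (( n == 0 )) && break
    sleep 1s
done
(( n == 0 )) || { echo "controller still present" >&2; exit 1; }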
00:15:49.641 17:40:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.641 17:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:15:49.641 17:40:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:15:51.010 17:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:15:51.010 17:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:15:51.010 17:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.010 17:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:15:51.010 17:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key0 --dhchap-ctrlr-key key1 00:15:51.010 17:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.010 17:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.010 17:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.010 17:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:51.010 17:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:51.010 17:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:15:51.574 nvme0n1 00:15:51.831 17:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --dhchap-key key2 --dhchap-ctrlr-key key3 00:15:51.831 17:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.831 17:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.831 17:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.831 17:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:51.831 17:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:15:51.831 17:40:29 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:51.831 17:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:15:51.831 17:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:51.831 17:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:15:51.831 17:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:51.831 17:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:51.831 17:40:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:15:52.088 request: 00:15:52.088 { 00:15:52.088 "name": "nvme0", 00:15:52.088 "dhchap_key": "key2", 00:15:52.088 "dhchap_ctrlr_key": "key0", 00:15:52.088 "method": "bdev_nvme_set_keys", 00:15:52.088 "req_id": 1 00:15:52.088 } 00:15:52.088 Got JSON-RPC error response 00:15:52.088 response: 00:15:52.088 { 00:15:52.088 "code": -13, 00:15:52.088 "message": "Permission denied" 00:15:52.088 } 00:15:52.088 17:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:15:52.088 17:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:52.088 17:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:52.088 17:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:52.088 17:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:52.088 17:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:52.088 17:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.345 17:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:15:52.345 17:40:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:15:53.716 17:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:15:53.716 17:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:15:53.716 17:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.716 17:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:15:53.716 17:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:15:53.716 17:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:15:53.716 17:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 614773 00:15:53.716 17:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@950 -- # '[' -z 614773 ']' 00:15:53.716 17:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 614773 00:15:53.716 17:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:15:53.716 17:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:53.716 17:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 614773 00:15:53.716 17:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:53.716 17:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:53.716 17:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 614773' 00:15:53.716 killing process with pid 614773 00:15:53.716 17:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 614773 00:15:53.716 17:40:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 614773 00:15:53.974 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:15:53.974 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:53.974 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:15:53.974 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:15:53.974 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:15:53.974 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:15:53.974 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:53.974 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:15:53.974 rmmod nvme_rdma 00:15:53.974 rmmod nvme_fabrics 00:15:53.974 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:53.974 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:15:53.974 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:15:53.974 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 635705 ']' 00:15:53.974 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 635705 00:15:53.974 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 635705 ']' 00:15:53.974 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 635705 00:15:53.974 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:15:53.974 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:53.974 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 635705 00:15:53.974 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:53.974 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:53.974 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 635705' 00:15:53.974 killing process with pid 635705 00:15:53.974 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 635705 00:15:53.974 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 635705 00:15:54.231 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:54.231 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:15:54.231 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Nr9 /tmp/spdk.key-sha256.4v9 /tmp/spdk.key-sha384.HM6 /tmp/spdk.key-sha512.PD3 /tmp/spdk.key-sha512.Unh /tmp/spdk.key-sha384.Ba4 /tmp/spdk.key-sha256.MXR '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:15:54.231 00:15:54.231 real 2m54.723s 00:15:54.231 user 6m40.279s 00:15:54.231 sys 0m25.220s 00:15:54.231 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:54.231 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.231 ************************************ 00:15:54.231 END TEST nvmf_auth_target 00:15:54.231 ************************************ 00:15:54.231 17:40:32 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:15:54.231 17:40:32 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:15:54.231 17:40:32 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:15:54.231 17:40:32 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' rdma = tcp ']' 00:15:54.231 17:40:32 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@60 -- # [[ rdma == \r\d\m\a ]] 00:15:54.231 17:40:32 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:15:54.231 17:40:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:54.231 17:40:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:54.231 17:40:32 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:54.490 ************************************ 00:15:54.490 START TEST nvmf_srq_overwhelm 00:15:54.490 ************************************ 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:15:54.490 * Looking for test storage... 
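The teardown that closed nvmf_auth_target above follows the usual autotest shape: kill the host application and the nvmf target by saved pid, unload the kernel NVMe fabrics modules, and delete the temporary DHCHAP key files. Condensed into the sequence the harness runs, with the pids and paths printed in this run:

# end of nvmf_auth_target: stop daemons, unload modules, remove keys
kill 614773 && wait 614773     # host app (reactor_1); wait works because
kill 635705 && wait 635705     # the harness spawned these pids itself
sync
modprobe -v -r nvme-rdma       # printed "rmmod nvme_rdma" above
modprobe -v -r nvme-fabrics    # printed "rmmod nvme_fabrics" above
rm -f /tmp/spdk.key-null.Nr9 /tmp/spdk.key-sha256.4v9 /tmp/spdk.key-sha384.HM6 \
      /tmp/spdk.key-sha512.PD3 /tmp/spdk.key-sha512.Unh /tmp/spdk.key-sha384.Ba4 \
      /tmp/spdk.key-sha256.MXR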
00:15:54.490 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1691 -- # lcov --version 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # IFS=.-: 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@336 -- # read -ra ver1 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # IFS=.-: 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@337 -- # read -ra ver2 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@338 -- # local 'op=<' 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@340 -- # ver1_l=2 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@341 -- # ver2_l=1 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@344 -- # case "$op" in 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@345 -- # : 1 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # decimal 1 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=1 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 1 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@365 -- # ver1[v]=1 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # decimal 2 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@353 -- # local d=2 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@355 -- # echo 2 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@366 -- # ver2[v]=2 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@368 -- # return 0 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:54.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.490 --rc genhtml_branch_coverage=1 00:15:54.490 --rc genhtml_function_coverage=1 00:15:54.490 --rc genhtml_legend=1 00:15:54.490 --rc geninfo_all_blocks=1 00:15:54.490 --rc geninfo_unexecuted_blocks=1 00:15:54.490 00:15:54.490 ' 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:54.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.490 --rc genhtml_branch_coverage=1 00:15:54.490 --rc genhtml_function_coverage=1 00:15:54.490 --rc genhtml_legend=1 00:15:54.490 --rc geninfo_all_blocks=1 00:15:54.490 --rc geninfo_unexecuted_blocks=1 00:15:54.490 00:15:54.490 ' 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:54.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.490 --rc genhtml_branch_coverage=1 00:15:54.490 --rc genhtml_function_coverage=1 00:15:54.490 --rc genhtml_legend=1 00:15:54.490 --rc geninfo_all_blocks=1 00:15:54.490 --rc geninfo_unexecuted_blocks=1 00:15:54.490 00:15:54.490 ' 00:15:54.490 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:54.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.490 --rc genhtml_branch_coverage=1 00:15:54.490 --rc genhtml_function_coverage=1 00:15:54.491 --rc genhtml_legend=1 00:15:54.491 --rc geninfo_all_blocks=1 00:15:54.491 --rc geninfo_unexecuted_blocks=1 00:15:54.491 00:15:54.491 ' 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@15 -- # shopt -s extglob 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
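The common.sh fragment sourced above derives the host identity once and reuses it for every nvme connect later in the test. A hedged sketch of that derivation; the gen-hostnqn output shape matches the trace, but the exact parameter expansion is an assumption:

    # Host identity: nvme-cli's gen-hostnqn emits an NQN whose trailing
    # UUID doubles as the host ID (extraction below is assumed).
    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}  # keep only the <uuid> part
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")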
00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # : 0 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:54.491 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@309 -- # xtrace_disable 00:15:54.491 17:40:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # pci_devs=() 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # net_devs=() 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # e810=() 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # local -ga e810 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # x722=() 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # local -ga x722 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # mlx=() 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- 
# local -ga mlx 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:16:01.053 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme 
connect -i 15' 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:16:01.053 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:16:01.053 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:16:01.054 Found net devices under 0000:18:00.0: mlx_0_0 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:16:01.054 Found net devices under 0000:18:00.1: mlx_0_1 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@440 -- # is_hw=yes 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@442 -- # [[ yes == yes ]] 
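Device discovery above ends with is_hw=yes: two mlx5 ConnectX functions (device ID 0x1013) were matched and their kernel interfaces resolved. The sysfs walk, condensed from the common.sh@408-427 trace lines shown:

    # Map each discovered PCI function to its registered netdev names.
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # e.g. .../net/mlx_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the sysfs path
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done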
00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@446 -- # rdma_device_init 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # uname 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe ib_cm 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe ib_core 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe ib_umad 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@70 -- # modprobe iw_cm 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@528 -- # allocate_nic_ips 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # get_rdma_if_list 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 
-- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:16:01.054 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:01.054 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:16:01.054 altname enp24s0f0np0 00:16:01.054 altname ens785f0np0 00:16:01.054 inet 192.168.100.8/24 scope global mlx_0_0 00:16:01.054 valid_lft forever preferred_lft forever 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:16:01.054 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:01.054 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:16:01.054 altname enp24s0f1np1 00:16:01.054 altname ens785f1np1 00:16:01.054 inet 192.168.100.9/24 scope global mlx_0_1 00:16:01.054 valid_lft forever preferred_lft forever 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # return 0 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # get_rdma_if_list 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:01.054 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:01.055 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:01.055 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:01.055 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@109 -- # continue 2 00:16:01.055 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:01.055 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:16:01.055 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:01.055 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:01.055 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:01.055 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:01.055 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:01.055 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:16:01.055 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:01.055 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:01.055 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:01.055 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:01.055 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:16:01.055 192.168.100.9' 00:16:01.055 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:16:01.055 192.168.100.9' 00:16:01.055 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # head -n 1 00:16:01.055 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:01.055 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:16:01.055 192.168.100.9' 00:16:01.055 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # tail -n +2 00:16:01.055 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # head -n 1 00:16:01.055 17:40:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:01.055 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:16:01.055 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:01.055 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:16:01.055 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:16:01.055 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:16:01.055 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:16:01.055 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:01.055 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:01.055 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:01.055 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@507 -- # nvmfpid=641416 00:16:01.055 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:01.055 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@508 -- # waitforlisten 641416 00:16:01.055 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@831 -- # '[' -z 641416 ']' 00:16:01.055 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.055 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:01.055 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
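Worked example of the address bookkeeping just traced: each interface IP is read with ip/awk/cut, collected into RDMA_IP_LIST, and the two target IPs are peeled off with head/tail exactly as common.sh@483-484 show:

    # Per-interface address, as get_ip_address does it:
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8
    # First/second target selection from the collected list:
    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9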
00:16:01.055 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:01.055 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:01.055 [2024-10-17 17:40:39.091002] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:16:01.055 [2024-10-17 17:40:39.091064] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.055 [2024-10-17 17:40:39.166159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:01.055 [2024-10-17 17:40:39.212361] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:01.055 [2024-10-17 17:40:39.212399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:01.055 [2024-10-17 17:40:39.212410] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:01.055 [2024-10-17 17:40:39.212423] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:01.055 [2024-10-17 17:40:39.212431] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:01.055 [2024-10-17 17:40:39.213610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.055 [2024-10-17 17:40:39.213638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:01.055 [2024-10-17 17:40:39.213714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:01.055 [2024-10-17 17:40:39.213716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.055 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:01.055 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@864 -- # return 0 00:16:01.055 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:01.055 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:01.055 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:01.055 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:01.055 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:16:01.055 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.055 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:01.055 [2024-10-17 17:40:39.393426] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1d802c0/0x1d847b0) succeed. 00:16:01.055 [2024-10-17 17:40:39.403941] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1d81950/0x1dc5e50) succeed. 
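The loop that follows provisions six subsystems, repeating create/attach/listen/connect for cnode0 through cnode5. Equivalent plain rpc.py calls, as a sketch (the harness's rpc_cmd wraps scripts/rpc.py; all flags as recorded in the trace):

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024
    for i in $(seq 0 5); do
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
        scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t rdma -a 192.168.100.8 -s 4420
        nvme connect -i 15 "${NVME_HOST[@]}" -t rdma \
            -n "nqn.2016-06.io.spdk:cnode$i" -a 192.168.100.8 -s 4420
    done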
00:16:01.312 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.312 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:16:01.312 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:16:01.312 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:16:01.312 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.312 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:01.312 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.312 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:01.312 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.312 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:01.312 Malloc0 00:16:01.312 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.312 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:16:01.312 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.312 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:01.312 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.312 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:16:01.312 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.312 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:01.312 [2024-10-17 17:40:39.514629] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:01.312 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.312 17:40:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:16:03.208 17:40:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:16:03.208 17:40:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:16:03.208 17:40:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:16:03.208 17:40:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:16:03.208 17:40:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- 
# lsblk -l -o NAME 00:16:03.208 17:40:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:16:03.208 17:40:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:16:03.208 17:40:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:16:03.208 17:40:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:03.208 17:40:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.208 17:40:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:03.208 17:40:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.208 17:40:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:03.208 17:40:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.208 17:40:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:03.208 Malloc1 00:16:03.208 17:40:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.208 17:40:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:03.208 17:40:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.208 17:40:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:03.208 17:40:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.208 17:40:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:03.208 17:40:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.208 17:40:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:03.208 17:40:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.208 17:40:41 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:16:04.580 17:40:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:16:04.580 17:40:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:16:04.580 17:40:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:16:04.580 17:40:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme1n1 00:16:04.580 17:40:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:16:04.580 17:40:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1242 -- # grep -q -w nvme1n1 00:16:04.580 17:40:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:16:04.580 17:40:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:16:04.580 17:40:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:16:04.580 17:40:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.580 17:40:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:04.580 17:40:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.580 17:40:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:16:04.580 17:40:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.580 17:40:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:04.580 Malloc2 00:16:04.580 17:40:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.580 17:40:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:16:04.580 17:40:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.580 17:40:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:04.580 17:40:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.580 17:40:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:16:04.580 17:40:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.580 17:40:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:04.580 17:40:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.580 17:40:42 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:16:06.477 17:40:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:16:06.477 17:40:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:16:06.477 17:40:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:16:06.477 17:40:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme2n1 00:16:06.477 17:40:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:16:06.477 17:40:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme2n1 00:16:06.477 17:40:44 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:16:06.477 17:40:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:16:06.477 17:40:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:16:06.477 17:40:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.477 17:40:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:06.477 17:40:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.477 17:40:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:16:06.478 17:40:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.478 17:40:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:06.478 Malloc3 00:16:06.478 17:40:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.478 17:40:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:16:06.478 17:40:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.478 17:40:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:06.478 17:40:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.478 17:40:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:16:06.478 17:40:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.478 17:40:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:06.478 17:40:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.478 17:40:44 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:16:07.850 17:40:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:16:07.850 17:40:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:16:07.850 17:40:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:16:07.850 17:40:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme3n1 00:16:07.850 17:40:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:16:07.850 17:40:45 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme3n1 00:16:07.850 17:40:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:16:07.850 
17:40:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:16:07.850 17:40:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:16:07.850 17:40:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.850 17:40:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:07.850 17:40:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.850 17:40:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:16:07.850 17:40:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.850 17:40:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:07.850 Malloc4 00:16:07.850 17:40:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.850 17:40:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:16:07.850 17:40:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.850 17:40:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:07.850 17:40:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.850 17:40:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:16:07.850 17:40:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.850 17:40:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:07.850 17:40:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.850 17:40:46 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:16:09.221 17:40:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:16:09.221 17:40:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:16:09.478 17:40:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:16:09.478 17:40:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme4n1 00:16:09.478 17:40:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:16:09.478 17:40:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme4n1 00:16:09.478 17:40:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:16:09.478 17:40:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 
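The waitforblk calls traced above (autotest_common.sh@1235-1246) gate each iteration on the new namespace actually surfacing as a block device. A sketch of that helper; the lsblk/grep probe and return paths match the trace, while the retry bound and sleep interval are assumptions:

    waitforblk() {
        local i=0
        while ! lsblk -l -o NAME | grep -q -w "$1"; do
            if (( ++i > 15 )); then return 1; fi   # retry bound assumed
            sleep 1
        done
        return 0
    }
    waitforblk nvme4n1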
00:16:09.478 17:40:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:16:09.478 17:40:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.478 17:40:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:09.479 17:40:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.479 17:40:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:16:09.479 17:40:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.479 17:40:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:09.479 Malloc5 00:16:09.479 17:40:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.479 17:40:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:16:09.479 17:40:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.479 17:40:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:09.479 17:40:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.479 17:40:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:16:09.479 17:40:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.479 17:40:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:09.479 17:40:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.479 17:40:47 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:16:11.377 17:40:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:16:11.377 17:40:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:16:11.377 17:40:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:16:11.377 17:40:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme5n1 00:16:11.377 17:40:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:16:11.377 17:40:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme5n1 00:16:11.377 17:40:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:16:11.377 17:40:49 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:16:11.377 
[global] 00:16:11.377 thread=1 00:16:11.377 invalidate=1 00:16:11.377 rw=read 00:16:11.377 time_based=1 00:16:11.377 runtime=10 00:16:11.377 ioengine=libaio 00:16:11.377 direct=1 00:16:11.377 bs=1048576 00:16:11.377 iodepth=128 00:16:11.377 norandommap=1 00:16:11.377 numjobs=13 00:16:11.377 00:16:11.377 [job0] 00:16:11.377 filename=/dev/nvme0n1 00:16:11.377 [job1] 00:16:11.377 filename=/dev/nvme1n1 00:16:11.377 [job2] 00:16:11.377 filename=/dev/nvme2n1 00:16:11.377 [job3] 00:16:11.377 filename=/dev/nvme3n1 00:16:11.377 [job4] 00:16:11.377 filename=/dev/nvme4n1 00:16:11.377 [job5] 00:16:11.377 filename=/dev/nvme5n1 00:16:11.377 Could not set queue depth (nvme0n1) 00:16:11.377 Could not set queue depth (nvme1n1) 00:16:11.377 Could not set queue depth (nvme2n1) 00:16:11.377 Could not set queue depth (nvme3n1) 00:16:11.377 Could not set queue depth (nvme4n1) 00:16:11.377 Could not set queue depth (nvme5n1) 00:16:11.377 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:16:11.377 ... 00:16:11.377 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:16:11.377 ... 00:16:11.377 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:16:11.377 ... 00:16:11.377 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:16:11.377 ... 00:16:11.377 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:16:11.377 ... 00:16:11.377 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:16:11.377 ... 
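[Editor's note] The job file above fixes the arithmetic of the "overwhelm": six namespaces, thirteen threads each (numjobs=13), every thread holding 128 one-MiB reads in flight. A quick sanity check of the aggregates — all values taken from the job file, nothing assumed:

    devices=6 numjobs=13 iodepth=128
    echo "fio threads:         $((devices * numjobs))"            # 78, matching "Starting 78 threads" below
    echo "peak in-flight I/Os: $((devices * numjobs * iodepth))"  # 9984 outstanding 1 MiB reads

In the per-job records that follow, slat is submission latency, clat is completion latency, and the "IO depths" line is a histogram of how many I/Os each job actually kept queued. The "Could not set queue depth" warnings appear to be fio failing to adjust the block devices' queue settings via sysfs; the runs proceed regardless.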
00:16:11.377 fio-3.35 00:16:11.377 Starting 78 threads 00:16:26.259 00:16:26.259 job0: (groupid=0, jobs=1): err= 0: pid=643045: Thu Oct 17 17:41:02 2024 00:16:26.259 read: IOPS=3, BW=3734KiB/s (3824kB/s)(45.0MiB/12341msec) 00:16:26.259 slat (usec): min=786, max=3709.5k, avg=227793.77, stdev=753824.97 00:16:26.259 clat (msec): min=2089, max=12338, avg=11487.74, stdev=2403.26 00:16:26.259 lat (msec): min=4214, max=12340, avg=11715.53, stdev=1931.80 00:16:26.259 clat percentiles (msec): 00:16:26.259 | 1.00th=[ 2089], 5.00th=[ 4279], 10.00th=[ 8490], 20.00th=[12281], 00:16:26.259 | 30.00th=[12281], 40.00th=[12281], 50.00th=[12281], 60.00th=[12281], 00:16:26.259 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:16:26.259 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:16:26.259 | 99.99th=[12281] 00:16:26.259 lat (msec) : >=2000=100.00% 00:16:26.259 cpu : usr=0.00%, sys=0.41%, ctx=47, majf=0, minf=11521 00:16:26.259 IO depths : 1=2.2%, 2=4.4%, 4=8.9%, 8=17.8%, 16=35.6%, 32=31.1%, >=64=0.0% 00:16:26.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.259 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:26.259 issued rwts: total=45,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.259 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.259 job0: (groupid=0, jobs=1): err= 0: pid=643046: Thu Oct 17 17:41:02 2024 00:16:26.260 read: IOPS=6, BW=6301KiB/s (6452kB/s)(76.0MiB/12351msec) 00:16:26.260 slat (usec): min=973, max=2129.6k, avg=134304.60, stdev=492288.97 00:16:26.260 clat (msec): min=2143, max=12348, avg=10765.35, stdev=2897.02 00:16:26.260 lat (msec): min=4189, max=12350, avg=10899.66, stdev=2723.36 00:16:26.260 clat percentiles (msec): 00:16:26.260 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 8490], 00:16:26.260 | 30.00th=[12147], 40.00th=[12147], 50.00th=[12281], 60.00th=[12281], 00:16:26.260 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:16:26.260 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:16:26.260 | 99.99th=[12416] 00:16:26.260 lat (msec) : >=2000=100.00% 00:16:26.260 cpu : usr=0.00%, sys=0.69%, ctx=78, majf=0, minf=19457 00:16:26.260 IO depths : 1=1.3%, 2=2.6%, 4=5.3%, 8=10.5%, 16=21.1%, 32=42.1%, >=64=17.1% 00:16:26.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.260 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:26.260 issued rwts: total=76,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.260 job0: (groupid=0, jobs=1): err= 0: pid=643047: Thu Oct 17 17:41:02 2024 00:16:26.260 read: IOPS=9, BW=9.81MiB/s (10.3MB/s)(120MiB/12237msec) 00:16:26.260 slat (usec): min=434, max=4288.6k, avg=84331.12, stdev=476035.17 00:16:26.260 clat (msec): min=2115, max=12216, avg=6120.19, stdev=3494.32 00:16:26.260 lat (msec): min=3877, max=12235, avg=6204.52, stdev=3518.91 00:16:26.260 clat percentiles (msec): 00:16:26.260 | 1.00th=[ 3876], 5.00th=[ 3910], 10.00th=[ 3943], 20.00th=[ 3977], 00:16:26.260 | 30.00th=[ 4044], 40.00th=[ 4077], 50.00th=[ 4144], 60.00th=[ 4178], 00:16:26.260 | 70.00th=[ 4245], 80.00th=[12013], 90.00th=[12147], 95.00th=[12147], 00:16:26.260 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:16:26.260 | 99.99th=[12281] 00:16:26.260 lat (msec) : >=2000=100.00% 00:16:26.260 cpu : usr=0.00%, sys=0.88%, ctx=75, majf=0, minf=30721 
00:16:26.260 IO depths : 1=0.8%, 2=1.7%, 4=3.3%, 8=6.7%, 16=13.3%, 32=26.7%, >=64=47.5% 00:16:26.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.260 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:26.260 issued rwts: total=120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.260 job0: (groupid=0, jobs=1): err= 0: pid=643048: Thu Oct 17 17:41:02 2024 00:16:26.260 read: IOPS=5, BW=5629KiB/s (5764kB/s)(68.0MiB/12370msec) 00:16:26.260 slat (usec): min=445, max=4243.0k, avg=150762.19, stdev=637560.91 00:16:26.260 clat (msec): min=2116, max=12367, avg=11383.12, stdev=2399.36 00:16:26.260 lat (msec): min=4209, max=12368, avg=11533.88, stdev=2113.49 00:16:26.260 clat percentiles (msec): 00:16:26.260 | 1.00th=[ 2123], 5.00th=[ 4245], 10.00th=[ 6409], 20.00th=[12147], 00:16:26.260 | 30.00th=[12147], 40.00th=[12147], 50.00th=[12281], 60.00th=[12281], 00:16:26.260 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12416], 95.00th=[12416], 00:16:26.260 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:16:26.260 | 99.99th=[12416] 00:16:26.260 lat (msec) : >=2000=100.00% 00:16:26.260 cpu : usr=0.00%, sys=0.56%, ctx=80, majf=0, minf=17409 00:16:26.260 IO depths : 1=1.5%, 2=2.9%, 4=5.9%, 8=11.8%, 16=23.5%, 32=47.1%, >=64=7.4% 00:16:26.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.260 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:26.260 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.260 job0: (groupid=0, jobs=1): err= 0: pid=643049: Thu Oct 17 17:41:02 2024 00:16:26.260 read: IOPS=4, BW=4419KiB/s (4525kB/s)(53.0MiB/12281msec) 00:16:26.260 slat (usec): min=951, max=2096.3k, avg=191670.15, stdev=578052.70 00:16:26.260 clat (msec): min=2121, max=12277, avg=9376.32, stdev=3383.82 00:16:26.260 lat (msec): min=4181, max=12279, avg=9567.99, stdev=3250.03 00:16:26.260 clat percentiles (msec): 00:16:26.260 | 1.00th=[ 2123], 5.00th=[ 4178], 10.00th=[ 4245], 20.00th=[ 6342], 00:16:26.260 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[12147], 60.00th=[12147], 00:16:26.260 | 70.00th=[12147], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:16:26.260 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:16:26.260 | 99.99th=[12281] 00:16:26.260 lat (msec) : >=2000=100.00% 00:16:26.260 cpu : usr=0.00%, sys=0.48%, ctx=54, majf=0, minf=13569 00:16:26.260 IO depths : 1=1.9%, 2=3.8%, 4=7.5%, 8=15.1%, 16=30.2%, 32=41.5%, >=64=0.0% 00:16:26.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.260 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:26.260 issued rwts: total=53,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.260 job0: (groupid=0, jobs=1): err= 0: pid=643050: Thu Oct 17 17:41:02 2024 00:16:26.260 read: IOPS=6, BW=6384KiB/s (6537kB/s)(77.0MiB/12351msec) 00:16:26.260 slat (usec): min=955, max=3519.4k, avg=132799.56, stdev=564073.17 00:16:26.260 clat (msec): min=2124, max=12347, avg=10707.82, stdev=2832.05 00:16:26.260 lat (msec): min=4201, max=12350, avg=10840.62, stdev=2658.69 00:16:26.260 clat percentiles (msec): 00:16:26.260 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 6342], 20.00th=[ 8490], 00:16:26.260 | 30.00th=[12147], 40.00th=[12147], 
50.00th=[12281], 60.00th=[12281], 00:16:26.260 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:16:26.260 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:16:26.260 | 99.99th=[12281] 00:16:26.260 lat (msec) : >=2000=100.00% 00:16:26.260 cpu : usr=0.02%, sys=0.66%, ctx=71, majf=0, minf=19713 00:16:26.260 IO depths : 1=1.3%, 2=2.6%, 4=5.2%, 8=10.4%, 16=20.8%, 32=41.6%, >=64=18.2% 00:16:26.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.260 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:26.260 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.260 job0: (groupid=0, jobs=1): err= 0: pid=643051: Thu Oct 17 17:41:02 2024 00:16:26.260 read: IOPS=2, BW=2917KiB/s (2987kB/s)(35.0MiB/12285msec) 00:16:26.260 slat (usec): min=1074, max=4277.4k, avg=291349.02, stdev=981176.41 00:16:26.260 clat (msec): min=2087, max=12283, avg=11150.13, stdev=2806.59 00:16:26.260 lat (msec): min=4227, max=12284, avg=11441.48, stdev=2326.30 00:16:26.260 clat percentiles (msec): 00:16:26.260 | 1.00th=[ 2089], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[12147], 00:16:26.260 | 30.00th=[12147], 40.00th=[12281], 50.00th=[12281], 60.00th=[12281], 00:16:26.260 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:16:26.260 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:16:26.260 | 99.99th=[12281] 00:16:26.260 lat (msec) : >=2000=100.00% 00:16:26.260 cpu : usr=0.02%, sys=0.30%, ctx=33, majf=0, minf=8961 00:16:26.260 IO depths : 1=2.9%, 2=5.7%, 4=11.4%, 8=22.9%, 16=45.7%, 32=11.4%, >=64=0.0% 00:16:26.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.260 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:26.260 issued rwts: total=35,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.260 job0: (groupid=0, jobs=1): err= 0: pid=643052: Thu Oct 17 17:41:02 2024 00:16:26.260 read: IOPS=2, BW=2346KiB/s (2402kB/s)(28.0MiB/12224msec) 00:16:26.260 slat (usec): min=930, max=2134.6k, avg=361541.48, stdev=776943.79 00:16:26.260 clat (msec): min=2099, max=12222, avg=9549.93, stdev=3479.81 00:16:26.260 lat (msec): min=4195, max=12223, avg=9911.47, stdev=3191.02 00:16:26.260 clat percentiles (msec): 00:16:26.260 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 4212], 20.00th=[ 4245], 00:16:26.260 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[12147], 60.00th=[12147], 00:16:26.260 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12281], 95.00th=[12281], 00:16:26.260 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:16:26.260 | 99.99th=[12281] 00:16:26.260 lat (msec) : >=2000=100.00% 00:16:26.260 cpu : usr=0.01%, sys=0.23%, ctx=33, majf=0, minf=7169 00:16:26.260 IO depths : 1=3.6%, 2=7.1%, 4=14.3%, 8=28.6%, 16=46.4%, 32=0.0%, >=64=0.0% 00:16:26.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.260 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:26.260 issued rwts: total=28,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.260 job0: (groupid=0, jobs=1): err= 0: pid=643053: Thu Oct 17 17:41:02 2024 00:16:26.260 read: IOPS=2, BW=2585KiB/s (2647kB/s)(31.0MiB/12282msec) 00:16:26.260 slat (usec): min=666, max=2112.1k, avg=326845.81, 
stdev=735325.52 00:16:26.260 clat (msec): min=2149, max=12231, avg=9634.37, stdev=3443.69 00:16:26.260 lat (msec): min=4223, max=12281, avg=9961.22, stdev=3180.34 00:16:26.260 clat percentiles (msec): 00:16:26.260 | 1.00th=[ 2165], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6342], 00:16:26.260 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[12147], 60.00th=[12147], 00:16:26.260 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:16:26.260 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:16:26.260 | 99.99th=[12281] 00:16:26.260 lat (msec) : >=2000=100.00% 00:16:26.260 cpu : usr=0.00%, sys=0.20%, ctx=41, majf=0, minf=7937 00:16:26.260 IO depths : 1=3.2%, 2=6.5%, 4=12.9%, 8=25.8%, 16=51.6%, 32=0.0%, >=64=0.0% 00:16:26.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.260 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:26.260 issued rwts: total=31,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.260 job0: (groupid=0, jobs=1): err= 0: pid=643054: Thu Oct 17 17:41:02 2024 00:16:26.260 read: IOPS=3, BW=3398KiB/s (3479kB/s)(41.0MiB/12356msec) 00:16:26.260 slat (usec): min=1008, max=3580.0k, avg=249432.62, stdev=768096.20 00:16:26.260 clat (msec): min=2128, max=12351, avg=11061.64, stdev=2665.12 00:16:26.260 lat (msec): min=4220, max=12355, avg=11311.08, stdev=2255.16 00:16:26.260 clat percentiles (msec): 00:16:26.260 | 1.00th=[ 2123], 5.00th=[ 4245], 10.00th=[ 6409], 20.00th=[12147], 00:16:26.260 | 30.00th=[12147], 40.00th=[12147], 50.00th=[12281], 60.00th=[12281], 00:16:26.260 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:16:26.260 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:16:26.260 | 99.99th=[12416] 00:16:26.260 lat (msec) : >=2000=100.00% 00:16:26.260 cpu : usr=0.00%, sys=0.39%, ctx=63, majf=0, minf=10497 00:16:26.260 IO depths : 1=2.4%, 2=4.9%, 4=9.8%, 8=19.5%, 16=39.0%, 32=24.4%, >=64=0.0% 00:16:26.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.261 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:26.261 issued rwts: total=41,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.261 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.261 job0: (groupid=0, jobs=1): err= 0: pid=643055: Thu Oct 17 17:41:02 2024 00:16:26.261 read: IOPS=0, BW=669KiB/s (686kB/s)(8192KiB/12236msec) 00:16:26.261 slat (usec): min=1511, max=4293.0k, avg=1269558.82, stdev=1562483.94 00:16:26.261 clat (msec): min=2079, max=12233, avg=8027.41, stdev=4262.31 00:16:26.261 lat (msec): min=4202, max=12235, avg=9296.97, stdev=3714.90 00:16:26.261 clat percentiles (msec): 00:16:26.261 | 1.00th=[ 2072], 5.00th=[ 2072], 10.00th=[ 2072], 20.00th=[ 4212], 00:16:26.261 | 30.00th=[ 4212], 40.00th=[ 6342], 50.00th=[ 6342], 60.00th=[10671], 00:16:26.261 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:16:26.261 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:16:26.261 | 99.99th=[12281] 00:16:26.261 lat (msec) : >=2000=100.00% 00:16:26.261 cpu : usr=0.00%, sys=0.06%, ctx=14, majf=0, minf=2049 00:16:26.261 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:26.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.261 complete : 0=0.0%, 4=0.0%, 8=100.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.261 issued rwts: 
total=8,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.261 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.261 job0: (groupid=0, jobs=1): err= 0: pid=643056: Thu Oct 17 17:41:02 2024 00:16:26.261 read: IOPS=90, BW=90.2MiB/s (94.6MB/s)(1113MiB/12343msec) 00:16:26.261 slat (usec): min=53, max=2087.2k, avg=9154.64, stdev=106979.21 00:16:26.261 clat (msec): min=350, max=12204, avg=1375.41, stdev=2561.98 00:16:26.261 lat (msec): min=351, max=12210, avg=1384.57, stdev=2570.92 00:16:26.261 clat percentiles (msec): 00:16:26.261 | 1.00th=[ 359], 5.00th=[ 388], 10.00th=[ 401], 20.00th=[ 405], 00:16:26.261 | 30.00th=[ 405], 40.00th=[ 409], 50.00th=[ 414], 60.00th=[ 418], 00:16:26.261 | 70.00th=[ 435], 80.00th=[ 502], 90.00th=[ 6409], 95.00th=[ 8658], 00:16:26.261 | 99.00th=[ 8792], 99.50th=[ 8926], 99.90th=[12147], 99.95th=[12147], 00:16:26.261 | 99.99th=[12147] 00:16:26.261 bw ( KiB/s): min= 2031, max=321536, per=6.10%, avg=183479.18, stdev=142481.39, samples=11 00:16:26.261 iops : min= 1, max= 314, avg=178.91, stdev=139.34, samples=11 00:16:26.261 lat (msec) : 500=77.00%, 750=9.97%, >=2000=13.03% 00:16:26.261 cpu : usr=0.05%, sys=2.10%, ctx=944, majf=0, minf=32770 00:16:26.261 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.3% 00:16:26.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.261 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:26.261 issued rwts: total=1113,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.261 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.261 job0: (groupid=0, jobs=1): err= 0: pid=643057: Thu Oct 17 17:41:02 2024 00:16:26.261 read: IOPS=3, BW=3999KiB/s (4095kB/s)(48.0MiB/12291msec) 00:16:26.261 slat (usec): min=979, max=2147.3k, avg=211746.14, stdev=613231.48 00:16:26.261 clat (msec): min=2126, max=12286, avg=10880.90, stdev=2815.52 00:16:26.261 lat (msec): min=4202, max=12290, avg=11092.65, stdev=2508.59 00:16:26.261 clat percentiles (msec): 00:16:26.261 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[10671], 00:16:26.261 | 30.00th=[12147], 40.00th=[12147], 50.00th=[12281], 60.00th=[12281], 00:16:26.261 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:16:26.261 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:16:26.261 | 99.99th=[12281] 00:16:26.261 lat (msec) : >=2000=100.00% 00:16:26.261 cpu : usr=0.00%, sys=0.41%, ctx=51, majf=0, minf=12289 00:16:26.261 IO depths : 1=2.1%, 2=4.2%, 4=8.3%, 8=16.7%, 16=33.3%, 32=35.4%, >=64=0.0% 00:16:26.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.261 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:26.261 issued rwts: total=48,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.261 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.261 job1: (groupid=0, jobs=1): err= 0: pid=643058: Thu Oct 17 17:41:02 2024 00:16:26.261 read: IOPS=4, BW=4273KiB/s (4376kB/s)(51.0MiB/12222msec) 00:16:26.261 slat (usec): min=946, max=2099.0k, avg=198267.05, stdev=589965.10 00:16:26.261 clat (msec): min=2109, max=12217, avg=9830.33, stdev=3011.71 00:16:26.261 lat (msec): min=4190, max=12221, avg=10028.60, stdev=2820.02 00:16:26.261 clat percentiles (msec): 00:16:26.261 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 6342], 20.00th=[ 6409], 00:16:26.261 | 30.00th=[ 8490], 40.00th=[ 8557], 50.00th=[12013], 60.00th=[12147], 00:16:26.261 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 
95.00th=[12147], 00:16:26.261 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:16:26.261 | 99.99th=[12281] 00:16:26.261 lat (msec) : >=2000=100.00% 00:16:26.261 cpu : usr=0.00%, sys=0.43%, ctx=52, majf=0, minf=13057 00:16:26.261 IO depths : 1=2.0%, 2=3.9%, 4=7.8%, 8=15.7%, 16=31.4%, 32=39.2%, >=64=0.0% 00:16:26.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.261 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:26.261 issued rwts: total=51,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.261 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.261 job1: (groupid=0, jobs=1): err= 0: pid=643059: Thu Oct 17 17:41:02 2024 00:16:26.261 read: IOPS=2, BW=2102KiB/s (2153kB/s)(25.0MiB/12177msec) 00:16:26.261 slat (usec): min=903, max=2098.0k, avg=402372.64, stdev=800082.60 00:16:26.261 clat (msec): min=2117, max=12174, avg=7944.34, stdev=3339.87 00:16:26.261 lat (msec): min=4190, max=12176, avg=8346.71, stdev=3212.13 00:16:26.261 clat percentiles (msec): 00:16:26.261 | 1.00th=[ 2123], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 4212], 00:16:26.261 | 30.00th=[ 6342], 40.00th=[ 6409], 50.00th=[ 6409], 60.00th=[ 8490], 00:16:26.261 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12013], 95.00th=[12147], 00:16:26.261 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:16:26.261 | 99.99th=[12147] 00:16:26.261 lat (msec) : >=2000=100.00% 00:16:26.261 cpu : usr=0.00%, sys=0.21%, ctx=45, majf=0, minf=6401 00:16:26.261 IO depths : 1=4.0%, 2=8.0%, 4=16.0%, 8=32.0%, 16=40.0%, 32=0.0%, >=64=0.0% 00:16:26.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.261 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:26.261 issued rwts: total=25,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.261 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.261 job1: (groupid=0, jobs=1): err= 0: pid=643060: Thu Oct 17 17:41:02 2024 00:16:26.261 read: IOPS=83, BW=83.7MiB/s (87.8MB/s)(1030MiB/12303msec) 00:16:26.261 slat (usec): min=43, max=2184.6k, avg=9889.39, stdev=130548.09 00:16:26.261 clat (msec): min=103, max=12208, avg=1479.55, stdev=3348.33 00:16:26.261 lat (msec): min=104, max=12219, avg=1489.44, stdev=3360.60 00:16:26.261 clat percentiles (msec): 00:16:26.261 | 1.00th=[ 108], 5.00th=[ 124], 10.00th=[ 125], 20.00th=[ 125], 00:16:26.261 | 30.00th=[ 125], 40.00th=[ 126], 50.00th=[ 127], 60.00th=[ 127], 00:16:26.261 | 70.00th=[ 128], 80.00th=[ 338], 90.00th=[ 8490], 95.00th=[10805], 00:16:26.261 | 99.00th=[10805], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:16:26.261 | 99.99th=[12147] 00:16:26.261 bw ( KiB/s): min= 1422, max=970858, per=8.78%, avg=263814.00, stdev=412377.50, samples=7 00:16:26.261 iops : min= 1, max= 948, avg=257.29, stdev=402.91, samples=7 00:16:26.261 lat (msec) : 250=77.18%, 500=3.79%, 750=3.98%, >=2000=15.05% 00:16:26.261 cpu : usr=0.02%, sys=1.41%, ctx=1001, majf=0, minf=32769 00:16:26.261 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.9% 00:16:26.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.261 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:26.261 issued rwts: total=1030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.261 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.261 job1: (groupid=0, jobs=1): err= 0: pid=643061: Thu Oct 17 17:41:02 2024 00:16:26.261 read: IOPS=2, BW=2442KiB/s 
(2500kB/s)(29.0MiB/12162msec) 00:16:26.261 slat (usec): min=990, max=2101.5k, avg=346680.55, stdev=750705.21 00:16:26.261 clat (msec): min=2107, max=12155, avg=9421.36, stdev=3004.09 00:16:26.261 lat (msec): min=4208, max=12161, avg=9768.04, stdev=2694.01 00:16:26.261 clat percentiles (msec): 00:16:26.261 | 1.00th=[ 2106], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6409], 00:16:26.261 | 30.00th=[ 8490], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[12013], 00:16:26.261 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:16:26.261 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:16:26.261 | 99.99th=[12147] 00:16:26.261 lat (msec) : >=2000=100.00% 00:16:26.261 cpu : usr=0.00%, sys=0.25%, ctx=49, majf=0, minf=7425 00:16:26.261 IO depths : 1=3.4%, 2=6.9%, 4=13.8%, 8=27.6%, 16=48.3%, 32=0.0%, >=64=0.0% 00:16:26.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.261 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:26.261 issued rwts: total=29,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.261 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.261 job1: (groupid=0, jobs=1): err= 0: pid=643062: Thu Oct 17 17:41:02 2024 00:16:26.261 read: IOPS=4, BW=5065KiB/s (5186kB/s)(61.0MiB/12333msec) 00:16:26.261 slat (usec): min=959, max=2114.8k, avg=167583.60, stdev=547634.28 00:16:26.261 clat (msec): min=2109, max=12329, avg=10558.39, stdev=2928.72 00:16:26.261 lat (msec): min=4215, max=12332, avg=10725.98, stdev=2722.41 00:16:26.261 clat percentiles (msec): 00:16:26.261 | 1.00th=[ 2106], 5.00th=[ 4245], 10.00th=[ 4279], 20.00th=[ 8557], 00:16:26.261 | 30.00th=[12013], 40.00th=[12147], 50.00th=[12281], 60.00th=[12281], 00:16:26.261 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:16:26.261 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:16:26.261 | 99.99th=[12281] 00:16:26.261 lat (msec) : >=2000=100.00% 00:16:26.261 cpu : usr=0.01%, sys=0.53%, ctx=64, majf=0, minf=15617 00:16:26.261 IO depths : 1=1.6%, 2=3.3%, 4=6.6%, 8=13.1%, 16=26.2%, 32=49.2%, >=64=0.0% 00:16:26.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.261 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:26.261 issued rwts: total=61,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.261 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.261 job1: (groupid=0, jobs=1): err= 0: pid=643063: Thu Oct 17 17:41:02 2024 00:16:26.261 read: IOPS=3, BW=3832KiB/s (3924kB/s)(46.0MiB/12291msec) 00:16:26.261 slat (usec): min=884, max=2127.3k, avg=221368.42, stdev=626861.15 00:16:26.261 clat (msec): min=2107, max=12285, avg=10970.75, stdev=2498.36 00:16:26.261 lat (msec): min=4208, max=12290, avg=11192.12, stdev=2117.70 00:16:26.261 clat percentiles (msec): 00:16:26.261 | 1.00th=[ 2106], 5.00th=[ 6342], 10.00th=[ 6342], 20.00th=[10671], 00:16:26.261 | 30.00th=[12147], 40.00th=[12147], 50.00th=[12147], 60.00th=[12147], 00:16:26.261 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:16:26.262 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:16:26.262 | 99.99th=[12281] 00:16:26.262 lat (msec) : >=2000=100.00% 00:16:26.262 cpu : usr=0.00%, sys=0.45%, ctx=66, majf=0, minf=11777 00:16:26.262 IO depths : 1=2.2%, 2=4.3%, 4=8.7%, 8=17.4%, 16=34.8%, 32=32.6%, >=64=0.0% 00:16:26.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.262 complete : 
0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:26.262 issued rwts: total=46,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.262 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.262 job1: (groupid=0, jobs=1): err= 0: pid=643064: Thu Oct 17 17:41:02 2024 00:16:26.262 read: IOPS=32, BW=32.5MiB/s (34.0MB/s)(397MiB/12227msec) 00:16:26.262 slat (usec): min=49, max=2093.4k, avg=25442.97, stdev=190410.19 00:16:26.262 clat (msec): min=499, max=8532, avg=2280.87, stdev=1884.40 00:16:26.262 lat (msec): min=503, max=8554, avg=2306.32, stdev=1913.81 00:16:26.262 clat percentiles (msec): 00:16:26.262 | 1.00th=[ 502], 5.00th=[ 550], 10.00th=[ 550], 20.00th=[ 550], 00:16:26.262 | 30.00th=[ 550], 40.00th=[ 1418], 50.00th=[ 1469], 60.00th=[ 1519], 00:16:26.262 | 70.00th=[ 3943], 80.00th=[ 4077], 90.00th=[ 4212], 95.00th=[ 6208], 00:16:26.262 | 99.00th=[ 8490], 99.50th=[ 8490], 99.90th=[ 8557], 99.95th=[ 8557], 00:16:26.262 | 99.99th=[ 8557] 00:16:26.262 bw ( KiB/s): min= 1595, max=229376, per=3.06%, avg=92012.83, stdev=107313.37, samples=6 00:16:26.262 iops : min= 1, max= 224, avg=89.67, stdev=104.76, samples=6 00:16:26.262 lat (msec) : 500=0.25%, 750=35.26%, 2000=27.20%, >=2000=37.28% 00:16:26.262 cpu : usr=0.00%, sys=1.19%, ctx=285, majf=0, minf=32769 00:16:26.262 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=8.1%, >=64=84.1% 00:16:26.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.262 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:16:26.262 issued rwts: total=397,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.262 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.262 job1: (groupid=0, jobs=1): err= 0: pid=643065: Thu Oct 17 17:41:02 2024 00:16:26.262 read: IOPS=31, BW=31.1MiB/s (32.6MB/s)(377MiB/12140msec) 00:16:26.262 slat (usec): min=51, max=2072.3k, avg=26541.02, stdev=210152.14 00:16:26.262 clat (msec): min=401, max=11022, avg=3928.86, stdev=4497.79 00:16:26.262 lat (msec): min=402, max=11023, avg=3955.40, stdev=4509.64 00:16:26.262 clat percentiles (msec): 00:16:26.262 | 1.00th=[ 401], 5.00th=[ 405], 10.00th=[ 405], 20.00th=[ 409], 00:16:26.262 | 30.00th=[ 409], 40.00th=[ 430], 50.00th=[ 502], 60.00th=[ 2601], 00:16:26.262 | 70.00th=[ 6812], 80.00th=[10805], 90.00th=[10939], 95.00th=[10939], 00:16:26.262 | 99.00th=[11073], 99.50th=[11073], 99.90th=[11073], 99.95th=[11073], 00:16:26.262 | 99.99th=[11073] 00:16:26.262 bw ( KiB/s): min= 2048, max=270336, per=2.43%, avg=73011.86, stdev=104868.80, samples=7 00:16:26.262 iops : min= 2, max= 264, avg=71.29, stdev=102.42, samples=7 00:16:26.262 lat (msec) : 500=50.13%, 750=7.16%, >=2000=42.71% 00:16:26.262 cpu : usr=0.02%, sys=1.06%, ctx=360, majf=0, minf=32769 00:16:26.262 IO depths : 1=0.3%, 2=0.5%, 4=1.1%, 8=2.1%, 16=4.2%, 32=8.5%, >=64=83.3% 00:16:26.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.262 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:16:26.262 issued rwts: total=377,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.262 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.262 job1: (groupid=0, jobs=1): err= 0: pid=643066: Thu Oct 17 17:41:02 2024 00:16:26.262 read: IOPS=5, BW=5257KiB/s (5383kB/s)(63.0MiB/12271msec) 00:16:26.262 slat (usec): min=790, max=2091.4k, avg=160754.63, stdev=529205.09 00:16:26.262 clat (msec): min=2142, max=12265, avg=9886.29, stdev=3297.80 00:16:26.262 lat (msec): min=4179, max=12270, avg=10047.04, 
stdev=3158.11 00:16:26.262 clat percentiles (msec): 00:16:26.262 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6342], 00:16:26.262 | 30.00th=[ 8490], 40.00th=[12013], 50.00th=[12147], 60.00th=[12147], 00:16:26.262 | 70.00th=[12147], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:16:26.262 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:16:26.262 | 99.99th=[12281] 00:16:26.262 lat (msec) : >=2000=100.00% 00:16:26.262 cpu : usr=0.00%, sys=0.51%, ctx=74, majf=0, minf=16129 00:16:26.262 IO depths : 1=1.6%, 2=3.2%, 4=6.3%, 8=12.7%, 16=25.4%, 32=50.8%, >=64=0.0% 00:16:26.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.262 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:26.262 issued rwts: total=63,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.262 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.262 job1: (groupid=0, jobs=1): err= 0: pid=643067: Thu Oct 17 17:41:02 2024 00:16:26.262 read: IOPS=2, BW=2445KiB/s (2503kB/s)(29.0MiB/12147msec) 00:16:26.262 slat (usec): min=985, max=2094.2k, avg=344877.92, stdev=742279.60 00:16:26.262 clat (msec): min=2144, max=12142, avg=7140.87, stdev=3216.65 00:16:26.262 lat (msec): min=4175, max=12146, avg=7485.74, stdev=3197.93 00:16:26.262 clat percentiles (msec): 00:16:26.262 | 1.00th=[ 2140], 5.00th=[ 4178], 10.00th=[ 4178], 20.00th=[ 4212], 00:16:26.262 | 30.00th=[ 4245], 40.00th=[ 4279], 50.00th=[ 6342], 60.00th=[ 8490], 00:16:26.262 | 70.00th=[ 8557], 80.00th=[10671], 90.00th=[12147], 95.00th=[12147], 00:16:26.262 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:16:26.262 | 99.99th=[12147] 00:16:26.262 lat (msec) : >=2000=100.00% 00:16:26.262 cpu : usr=0.01%, sys=0.24%, ctx=40, majf=0, minf=7425 00:16:26.262 IO depths : 1=3.4%, 2=6.9%, 4=13.8%, 8=27.6%, 16=48.3%, 32=0.0%, >=64=0.0% 00:16:26.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.262 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:26.262 issued rwts: total=29,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.262 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.262 job1: (groupid=0, jobs=1): err= 0: pid=643068: Thu Oct 17 17:41:02 2024 00:16:26.262 read: IOPS=2, BW=2779KiB/s (2846kB/s)(33.0MiB/12159msec) 00:16:26.262 slat (usec): min=935, max=2090.0k, avg=303426.05, stdev=705538.51 00:16:26.262 clat (msec): min=2145, max=12153, avg=7181.85, stdev=3127.97 00:16:26.262 lat (msec): min=4176, max=12158, avg=7485.27, stdev=3109.71 00:16:26.262 clat percentiles (msec): 00:16:26.262 | 1.00th=[ 2140], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 4245], 00:16:26.262 | 30.00th=[ 4245], 40.00th=[ 6342], 50.00th=[ 6342], 60.00th=[ 6409], 00:16:26.262 | 70.00th=[ 8557], 80.00th=[12013], 90.00th=[12013], 95.00th=[12013], 00:16:26.262 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:16:26.262 | 99.99th=[12147] 00:16:26.262 lat (msec) : >=2000=100.00% 00:16:26.262 cpu : usr=0.01%, sys=0.24%, ctx=47, majf=0, minf=8449 00:16:26.262 IO depths : 1=3.0%, 2=6.1%, 4=12.1%, 8=24.2%, 16=48.5%, 32=6.1%, >=64=0.0% 00:16:26.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.262 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:26.262 issued rwts: total=33,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.262 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.262 job1: (groupid=0, jobs=1): err= 
0: pid=643069: Thu Oct 17 17:41:02 2024 00:16:26.262 read: IOPS=87, BW=87.9MiB/s (92.1MB/s)(1083MiB/12324msec) 00:16:26.262 slat (usec): min=47, max=4258.8k, avg=9423.64, stdev=143881.33 00:16:26.262 clat (msec): min=355, max=8955, avg=1407.82, stdev=2647.49 00:16:26.262 lat (msec): min=357, max=8963, avg=1417.24, stdev=2656.22 00:16:26.262 clat percentiles (msec): 00:16:26.262 | 1.00th=[ 363], 5.00th=[ 397], 10.00th=[ 405], 20.00th=[ 409], 00:16:26.262 | 30.00th=[ 414], 40.00th=[ 418], 50.00th=[ 422], 60.00th=[ 426], 00:16:26.262 | 70.00th=[ 439], 80.00th=[ 506], 90.00th=[ 8658], 95.00th=[ 8792], 00:16:26.262 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:16:26.262 | 99.99th=[ 8926] 00:16:26.262 bw ( KiB/s): min= 2052, max=315392, per=7.23%, avg=217439.44, stdev=128228.30, samples=9 00:16:26.262 iops : min= 2, max= 308, avg=212.22, stdev=125.23, samples=9 00:16:26.262 lat (msec) : 500=74.61%, 750=12.93%, >=2000=12.47% 00:16:26.262 cpu : usr=0.06%, sys=1.82%, ctx=948, majf=0, minf=32769 00:16:26.262 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=3.0%, >=64=94.2% 00:16:26.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.262 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:26.262 issued rwts: total=1083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.262 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.262 job1: (groupid=0, jobs=1): err= 0: pid=643070: Thu Oct 17 17:41:02 2024 00:16:26.262 read: IOPS=1, BW=1180KiB/s (1208kB/s)(14.0MiB/12154msec) 00:16:26.262 slat (msec): min=12, max=2110, avg=717.57, stdev=979.87 00:16:26.262 clat (msec): min=2107, max=10660, avg=7285.74, stdev=2621.16 00:16:26.262 lat (msec): min=4189, max=12153, avg=8003.31, stdev=2464.94 00:16:26.262 clat percentiles (msec): 00:16:26.262 | 1.00th=[ 2106], 5.00th=[ 2106], 10.00th=[ 4178], 20.00th=[ 4212], 00:16:26.262 | 30.00th=[ 6342], 40.00th=[ 6342], 50.00th=[ 6409], 60.00th=[ 8490], 00:16:26.262 | 70.00th=[ 8557], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:16:26.262 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:16:26.262 | 99.99th=[10671] 00:16:26.262 lat (msec) : >=2000=100.00% 00:16:26.262 cpu : usr=0.00%, sys=0.11%, ctx=34, majf=0, minf=3585 00:16:26.262 IO depths : 1=7.1%, 2=14.3%, 4=28.6%, 8=50.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:26.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.262 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.262 issued rwts: total=14,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.262 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.262 job2: (groupid=0, jobs=1): err= 0: pid=643071: Thu Oct 17 17:41:02 2024 00:16:26.262 read: IOPS=8, BW=9187KiB/s (9408kB/s)(111MiB/12372msec) 00:16:26.262 slat (usec): min=967, max=2082.2k, avg=92120.32, stdev=406253.01 00:16:26.262 clat (msec): min=2146, max=12370, avg=10706.59, stdev=2742.54 00:16:26.262 lat (msec): min=4214, max=12371, avg=10798.71, stdev=2621.45 00:16:26.262 clat percentiles (msec): 00:16:26.262 | 1.00th=[ 4212], 5.00th=[ 4245], 10.00th=[ 6342], 20.00th=[ 8557], 00:16:26.262 | 30.00th=[10671], 40.00th=[12147], 50.00th=[12281], 60.00th=[12281], 00:16:26.262 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12416], 00:16:26.262 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:16:26.262 | 99.99th=[12416] 00:16:26.262 lat (msec) : >=2000=100.00% 00:16:26.262 cpu : 
usr=0.00%, sys=0.98%, ctx=102, majf=0, minf=28417 00:16:26.262 IO depths : 1=0.9%, 2=1.8%, 4=3.6%, 8=7.2%, 16=14.4%, 32=28.8%, >=64=43.2% 00:16:26.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.262 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:26.262 issued rwts: total=111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.262 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.262 job2: (groupid=0, jobs=1): err= 0: pid=643072: Thu Oct 17 17:41:02 2024 00:16:26.262 read: IOPS=2, BW=2929KiB/s (3000kB/s)(35.0MiB/12235msec) 00:16:26.262 slat (usec): min=918, max=2097.5k, avg=289587.22, stdev=693644.46 00:16:26.263 clat (msec): min=2098, max=12233, avg=9672.75, stdev=3189.95 00:16:26.263 lat (msec): min=4163, max=12234, avg=9962.34, stdev=2931.75 00:16:26.263 clat percentiles (msec): 00:16:26.263 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342], 00:16:26.263 | 30.00th=[ 8490], 40.00th=[ 8557], 50.00th=[12013], 60.00th=[12147], 00:16:26.263 | 70.00th=[12147], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:16:26.263 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:16:26.263 | 99.99th=[12281] 00:16:26.263 lat (msec) : >=2000=100.00% 00:16:26.263 cpu : usr=0.00%, sys=0.33%, ctx=52, majf=0, minf=8961 00:16:26.263 IO depths : 1=2.9%, 2=5.7%, 4=11.4%, 8=22.9%, 16=45.7%, 32=11.4%, >=64=0.0% 00:16:26.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.263 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:26.263 issued rwts: total=35,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.263 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.263 job2: (groupid=0, jobs=1): err= 0: pid=643073: Thu Oct 17 17:41:02 2024 00:16:26.263 read: IOPS=3, BW=3527KiB/s (3612kB/s)(35.0MiB/10162msec) 00:16:26.263 slat (usec): min=949, max=2100.2k, avg=286224.96, stdev=685509.30 00:16:26.263 clat (msec): min=143, max=10160, avg=5262.93, stdev=3254.73 00:16:26.263 lat (msec): min=2148, max=10161, avg=5549.15, stdev=3231.69 00:16:26.263 clat percentiles (msec): 00:16:26.263 | 1.00th=[ 144], 5.00th=[ 2165], 10.00th=[ 2165], 20.00th=[ 2198], 00:16:26.263 | 30.00th=[ 2232], 40.00th=[ 2265], 50.00th=[ 4329], 60.00th=[ 6477], 00:16:26.263 | 70.00th=[ 6544], 80.00th=[ 8658], 90.00th=[10134], 95.00th=[10134], 00:16:26.263 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:16:26.263 | 99.99th=[10134] 00:16:26.263 lat (msec) : 250=2.86%, >=2000=97.14% 00:16:26.263 cpu : usr=0.03%, sys=0.31%, ctx=61, majf=0, minf=8961 00:16:26.263 IO depths : 1=2.9%, 2=5.7%, 4=11.4%, 8=22.9%, 16=45.7%, 32=11.4%, >=64=0.0% 00:16:26.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.263 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:26.263 issued rwts: total=35,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.263 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.263 job2: (groupid=0, jobs=1): err= 0: pid=643074: Thu Oct 17 17:41:02 2024 00:16:26.263 read: IOPS=4, BW=5001KiB/s (5121kB/s)(60.0MiB/12286msec) 00:16:26.263 slat (usec): min=862, max=2072.0k, avg=169386.66, stdev=543802.20 00:16:26.263 clat (msec): min=2121, max=12282, avg=10410.93, stdev=2985.83 00:16:26.263 lat (msec): min=4176, max=12285, avg=10580.32, stdev=2789.44 00:16:26.263 clat percentiles (msec): 00:16:26.263 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6409], 
00:16:26.263 | 30.00th=[10671], 40.00th=[12147], 50.00th=[12147], 60.00th=[12147], 00:16:26.263 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:16:26.263 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:16:26.263 | 99.99th=[12281] 00:16:26.263 lat (msec) : >=2000=100.00% 00:16:26.263 cpu : usr=0.00%, sys=0.56%, ctx=83, majf=0, minf=15361 00:16:26.263 IO depths : 1=1.7%, 2=3.3%, 4=6.7%, 8=13.3%, 16=26.7%, 32=48.3%, >=64=0.0% 00:16:26.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.263 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:26.263 issued rwts: total=60,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.263 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.263 job2: (groupid=0, jobs=1): err= 0: pid=643075: Thu Oct 17 17:41:02 2024 00:16:26.263 read: IOPS=2, BW=2427KiB/s (2485kB/s)(29.0MiB/12236msec) 00:16:26.263 slat (usec): min=1072, max=2105.5k, avg=348524.54, stdev=754620.82 00:16:26.263 clat (msec): min=2128, max=12231, avg=9462.08, stdev=3360.30 00:16:26.263 lat (msec): min=4214, max=12235, avg=9810.60, stdev=3085.42 00:16:26.263 clat percentiles (msec): 00:16:26.263 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6342], 00:16:26.263 | 30.00th=[ 6409], 40.00th=[ 8557], 50.00th=[12013], 60.00th=[12147], 00:16:26.263 | 70.00th=[12147], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:16:26.263 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:16:26.263 | 99.99th=[12281] 00:16:26.263 lat (msec) : >=2000=100.00% 00:16:26.263 cpu : usr=0.01%, sys=0.24%, ctx=51, majf=0, minf=7425 00:16:26.263 IO depths : 1=3.4%, 2=6.9%, 4=13.8%, 8=27.6%, 16=48.3%, 32=0.0%, >=64=0.0% 00:16:26.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.263 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:26.263 issued rwts: total=29,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.263 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.263 job2: (groupid=0, jobs=1): err= 0: pid=643076: Thu Oct 17 17:41:02 2024 00:16:26.263 read: IOPS=1, BW=1934KiB/s (1980kB/s)(23.0MiB/12178msec) 00:16:26.263 slat (usec): min=938, max=2088.3k, avg=437029.54, stdev=826479.90 00:16:26.263 clat (msec): min=2126, max=10679, avg=7306.49, stdev=2647.63 00:16:26.263 lat (msec): min=4194, max=12177, avg=7743.52, stdev=2582.46 00:16:26.263 clat percentiles (msec): 00:16:26.263 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4212], 20.00th=[ 4245], 00:16:26.263 | 30.00th=[ 4245], 40.00th=[ 6409], 50.00th=[ 8490], 60.00th=[ 8490], 00:16:26.263 | 70.00th=[ 8557], 80.00th=[10671], 90.00th=[10671], 95.00th=[10671], 00:16:26.263 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:16:26.263 | 99.99th=[10671] 00:16:26.263 lat (msec) : >=2000=100.00% 00:16:26.263 cpu : usr=0.00%, sys=0.17%, ctx=40, majf=0, minf=5889 00:16:26.263 IO depths : 1=4.3%, 2=8.7%, 4=17.4%, 8=34.8%, 16=34.8%, 32=0.0%, >=64=0.0% 00:16:26.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.263 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:26.263 issued rwts: total=23,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.263 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.263 job2: (groupid=0, jobs=1): err= 0: pid=643077: Thu Oct 17 17:41:02 2024 00:16:26.263 read: IOPS=6, BW=6727KiB/s (6888kB/s)(81.0MiB/12330msec) 00:16:26.263 slat 
(usec): min=970, max=2096.8k, avg=125964.29, stdev=476184.42 00:16:26.263 clat (msec): min=2126, max=12328, avg=10882.10, stdev=2630.39 00:16:26.263 lat (msec): min=4184, max=12329, avg=11008.06, stdev=2443.51 00:16:26.263 clat percentiles (msec): 00:16:26.263 | 1.00th=[ 2123], 5.00th=[ 4245], 10.00th=[ 6342], 20.00th=[10671], 00:16:26.263 | 30.00th=[12147], 40.00th=[12147], 50.00th=[12281], 60.00th=[12281], 00:16:26.263 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:16:26.263 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:16:26.263 | 99.99th=[12281] 00:16:26.263 lat (msec) : >=2000=100.00% 00:16:26.263 cpu : usr=0.00%, sys=0.73%, ctx=88, majf=0, minf=20737 00:16:26.263 IO depths : 1=1.2%, 2=2.5%, 4=4.9%, 8=9.9%, 16=19.8%, 32=39.5%, >=64=22.2% 00:16:26.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.263 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:26.263 issued rwts: total=81,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.263 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.263 job2: (groupid=0, jobs=1): err= 0: pid=643078: Thu Oct 17 17:41:02 2024 00:16:26.263 read: IOPS=3, BW=3907KiB/s (4001kB/s)(39.0MiB/10221msec) 00:16:26.263 slat (usec): min=959, max=2068.6k, avg=258483.88, stdev=653974.10 00:16:26.263 clat (msec): min=139, max=10217, avg=7432.24, stdev=3251.73 00:16:26.263 lat (msec): min=2167, max=10220, avg=7690.72, stdev=3051.22 00:16:26.263 clat percentiles (msec): 00:16:26.263 | 1.00th=[ 140], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 4329], 00:16:26.263 | 30.00th=[ 6477], 40.00th=[ 6544], 50.00th=[ 8658], 60.00th=[10134], 00:16:26.263 | 70.00th=[10134], 80.00th=[10268], 90.00th=[10268], 95.00th=[10268], 00:16:26.263 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:16:26.263 | 99.99th=[10268] 00:16:26.263 lat (msec) : 250=2.56%, >=2000=97.44% 00:16:26.263 cpu : usr=0.00%, sys=0.44%, ctx=59, majf=0, minf=9985 00:16:26.263 IO depths : 1=2.6%, 2=5.1%, 4=10.3%, 8=20.5%, 16=41.0%, 32=20.5%, >=64=0.0% 00:16:26.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.263 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:26.263 issued rwts: total=39,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.263 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.263 job2: (groupid=0, jobs=1): err= 0: pid=643079: Thu Oct 17 17:41:02 2024 00:16:26.263 read: IOPS=4, BW=4598KiB/s (4708kB/s)(46.0MiB/10245msec) 00:16:26.263 slat (usec): min=812, max=2080.0k, avg=219948.11, stdev=609050.20 00:16:26.263 clat (msec): min=126, max=10239, avg=7805.80, stdev=3160.82 00:16:26.263 lat (msec): min=2156, max=10244, avg=8025.75, stdev=2960.24 00:16:26.263 clat percentiles (msec): 00:16:26.263 | 1.00th=[ 127], 5.00th=[ 2165], 10.00th=[ 2232], 20.00th=[ 4396], 00:16:26.263 | 30.00th=[ 6477], 40.00th=[ 8658], 50.00th=[10000], 60.00th=[10268], 00:16:26.263 | 70.00th=[10268], 80.00th=[10268], 90.00th=[10268], 95.00th=[10268], 00:16:26.263 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:16:26.263 | 99.99th=[10268] 00:16:26.263 lat (msec) : 250=2.17%, >=2000=97.83% 00:16:26.263 cpu : usr=0.00%, sys=0.50%, ctx=72, majf=0, minf=11777 00:16:26.263 IO depths : 1=2.2%, 2=4.3%, 4=8.7%, 8=17.4%, 16=34.8%, 32=32.6%, >=64=0.0% 00:16:26.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.263 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=100.0%, >=64=0.0% 00:16:26.263 issued rwts: total=46,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.263 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.263 job2: (groupid=0, jobs=1): err= 0: pid=643080: Thu Oct 17 17:41:02 2024 00:16:26.263 read: IOPS=1, BW=1180KiB/s (1208kB/s)(14.0MiB/12152msec) 00:16:26.263 slat (msec): min=13, max=3597, avg=716.38, stdev=1192.00 00:16:26.263 clat (msec): min=2121, max=8553, avg=5602.78, stdev=1803.73 00:16:26.263 lat (msec): min=4177, max=12151, avg=6319.15, stdev=2251.11 00:16:26.263 clat percentiles (msec): 00:16:26.263 | 1.00th=[ 2123], 5.00th=[ 2123], 10.00th=[ 4178], 20.00th=[ 4212], 00:16:26.263 | 30.00th=[ 4245], 40.00th=[ 4245], 50.00th=[ 6342], 60.00th=[ 6342], 00:16:26.263 | 70.00th=[ 6409], 80.00th=[ 6409], 90.00th=[ 8490], 95.00th=[ 8557], 00:16:26.263 | 99.00th=[ 8557], 99.50th=[ 8557], 99.90th=[ 8557], 99.95th=[ 8557], 00:16:26.263 | 99.99th=[ 8557] 00:16:26.263 lat (msec) : >=2000=100.00% 00:16:26.263 cpu : usr=0.00%, sys=0.13%, ctx=45, majf=0, minf=3585 00:16:26.263 IO depths : 1=7.1%, 2=14.3%, 4=28.6%, 8=50.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:26.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.263 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.263 issued rwts: total=14,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.263 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.263 job2: (groupid=0, jobs=1): err= 0: pid=643081: Thu Oct 17 17:41:02 2024 00:16:26.263 read: IOPS=2, BW=2352KiB/s (2409kB/s)(28.0MiB/12189msec) 00:16:26.263 slat (msec): min=3, max=2105, avg=360.36, stdev=761.16 00:16:26.263 clat (msec): min=2098, max=12118, avg=8920.48, stdev=2896.80 00:16:26.263 lat (msec): min=4175, max=12188, avg=9280.84, stdev=2632.25 00:16:26.263 clat percentiles (msec): 00:16:26.263 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 6342], 00:16:26.263 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[ 8557], 60.00th=[10671], 00:16:26.264 | 70.00th=[10671], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:16:26.264 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:16:26.264 | 99.99th=[12147] 00:16:26.264 lat (msec) : >=2000=100.00% 00:16:26.264 cpu : usr=0.00%, sys=0.23%, ctx=57, majf=0, minf=7169 00:16:26.264 IO depths : 1=3.6%, 2=7.1%, 4=14.3%, 8=28.6%, 16=46.4%, 32=0.0%, >=64=0.0% 00:16:26.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.264 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:26.264 issued rwts: total=28,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.264 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.264 job2: (groupid=0, jobs=1): err= 0: pid=643082: Thu Oct 17 17:41:02 2024 00:16:26.264 read: IOPS=2, BW=2672KiB/s (2736kB/s)(32.0MiB/12264msec) 00:16:26.264 slat (usec): min=997, max=2096.0k, avg=316467.70, stdev=727719.16 00:16:26.264 clat (msec): min=2135, max=12260, avg=9617.47, stdev=3193.58 00:16:26.264 lat (msec): min=4223, max=12262, avg=9933.94, stdev=2918.16 00:16:26.264 clat percentiles (msec): 00:16:26.264 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4279], 20.00th=[ 6409], 00:16:26.264 | 30.00th=[ 8557], 40.00th=[ 8658], 50.00th=[10671], 60.00th=[12281], 00:16:26.264 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:16:26.264 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:16:26.264 | 99.99th=[12281] 00:16:26.264 lat (msec) : 
>=2000=100.00% 00:16:26.264 cpu : usr=0.00%, sys=0.27%, ctx=50, majf=0, minf=8193 00:16:26.264 IO depths : 1=3.1%, 2=6.2%, 4=12.5%, 8=25.0%, 16=50.0%, 32=3.1%, >=64=0.0% 00:16:26.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.264 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:26.264 issued rwts: total=32,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.264 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.264 job2: (groupid=0, jobs=1): err= 0: pid=643083: Thu Oct 17 17:41:02 2024 00:16:26.264 read: IOPS=1, BW=1349KiB/s (1382kB/s)(16.0MiB/12141msec) 00:16:26.264 slat (msec): min=16, max=2091, avg=626.76, stdev=930.11 00:16:26.264 clat (msec): min=2112, max=12080, avg=7391.04, stdev=2846.97 00:16:26.264 lat (msec): min=4174, max=12140, avg=8017.80, stdev=2707.92 00:16:26.264 clat percentiles (msec): 00:16:26.264 | 1.00th=[ 2106], 5.00th=[ 2106], 10.00th=[ 4178], 20.00th=[ 4245], 00:16:26.264 | 30.00th=[ 6342], 40.00th=[ 6342], 50.00th=[ 6409], 60.00th=[ 8490], 00:16:26.264 | 70.00th=[ 8557], 80.00th=[10671], 90.00th=[10671], 95.00th=[12147], 00:16:26.264 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:16:26.264 | 99.99th=[12147] 00:16:26.264 lat (msec) : >=2000=100.00% 00:16:26.264 cpu : usr=0.00%, sys=0.16%, ctx=40, majf=0, minf=4097 00:16:26.264 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:16:26.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.264 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.264 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.264 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.264 job3: (groupid=0, jobs=1): err= 0: pid=643084: Thu Oct 17 17:41:02 2024 00:16:26.264 read: IOPS=4, BW=4276KiB/s (4379kB/s)(51.0MiB/12213msec) 00:16:26.264 slat (usec): min=765, max=2116.7k, avg=197813.33, stdev=583383.50 00:16:26.264 clat (msec): min=2123, max=12208, avg=10045.22, stdev=2698.57 00:16:26.264 lat (msec): min=4191, max=12212, avg=10243.03, stdev=2466.00 00:16:26.264 clat percentiles (msec): 00:16:26.264 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 6342], 20.00th=[ 8490], 00:16:26.264 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[10671], 60.00th=[12147], 00:16:26.264 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:16:26.264 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:16:26.264 | 99.99th=[12147] 00:16:26.264 lat (msec) : >=2000=100.00% 00:16:26.264 cpu : usr=0.00%, sys=0.42%, ctx=63, majf=0, minf=13057 00:16:26.264 IO depths : 1=2.0%, 2=3.9%, 4=7.8%, 8=15.7%, 16=31.4%, 32=39.2%, >=64=0.0% 00:16:26.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.264 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:26.264 issued rwts: total=51,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.264 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.264 job3: (groupid=0, jobs=1): err= 0: pid=643085: Thu Oct 17 17:41:02 2024 00:16:26.264 read: IOPS=3, BW=3514KiB/s (3599kB/s)(42.0MiB/12238msec) 00:16:26.264 slat (usec): min=946, max=2136.3k, avg=241056.11, stdev=639826.77 00:16:26.264 clat (msec): min=2112, max=12228, avg=9651.17, stdev=3300.35 00:16:26.264 lat (msec): min=4162, max=12237, avg=9892.23, stdev=3099.98 00:16:26.264 clat percentiles (msec): 00:16:26.264 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 4212], 
20.00th=[ 6342], 00:16:26.264 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[10671], 60.00th=[12013], 00:16:26.264 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:16:26.264 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:16:26.264 | 99.99th=[12281] 00:16:26.264 lat (msec) : >=2000=100.00% 00:16:26.264 cpu : usr=0.00%, sys=0.36%, ctx=61, majf=0, minf=10753 00:16:26.264 IO depths : 1=2.4%, 2=4.8%, 4=9.5%, 8=19.0%, 16=38.1%, 32=26.2%, >=64=0.0% 00:16:26.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.264 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:26.264 issued rwts: total=42,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.264 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.264 job3: (groupid=0, jobs=1): err= 0: pid=643086: Thu Oct 17 17:41:02 2024 00:16:26.264 read: IOPS=3, BW=3693KiB/s (3782kB/s)(44.0MiB/12200msec) 00:16:26.264 slat (usec): min=963, max=2098.1k, avg=228598.11, stdev=622070.30 00:16:26.264 clat (msec): min=2141, max=12196, avg=8687.05, stdev=3269.96 00:16:26.264 lat (msec): min=4186, max=12199, avg=8915.65, stdev=3151.12 00:16:26.264 clat percentiles (msec): 00:16:26.264 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 4279], 00:16:26.264 | 30.00th=[ 6409], 40.00th=[ 6409], 50.00th=[ 8557], 60.00th=[10671], 00:16:26.264 | 70.00th=[12147], 80.00th=[12147], 90.00th=[12147], 95.00th=[12147], 00:16:26.264 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:16:26.264 | 99.99th=[12147] 00:16:26.264 lat (msec) : >=2000=100.00% 00:16:26.264 cpu : usr=0.00%, sys=0.35%, ctx=54, majf=0, minf=11265 00:16:26.264 IO depths : 1=2.3%, 2=4.5%, 4=9.1%, 8=18.2%, 16=36.4%, 32=29.5%, >=64=0.0% 00:16:26.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.264 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:26.264 issued rwts: total=44,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.264 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.264 job3: (groupid=0, jobs=1): err= 0: pid=643087: Thu Oct 17 17:41:02 2024 00:16:26.264 read: IOPS=2, BW=2759KiB/s (2826kB/s)(33.0MiB/12246msec) 00:16:26.264 slat (usec): min=1001, max=2126.6k, avg=307141.37, stdev=711939.84 00:16:26.264 clat (msec): min=2109, max=12242, avg=10454.89, stdev=2851.36 00:16:26.264 lat (msec): min=4167, max=12245, avg=10762.04, stdev=2440.63 00:16:26.264 clat percentiles (msec): 00:16:26.264 | 1.00th=[ 2106], 5.00th=[ 4178], 10.00th=[ 6342], 20.00th=[ 8490], 00:16:26.264 | 30.00th=[10671], 40.00th=[12013], 50.00th=[12147], 60.00th=[12281], 00:16:26.264 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:16:26.264 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:16:26.264 | 99.99th=[12281] 00:16:26.264 lat (msec) : >=2000=100.00% 00:16:26.264 cpu : usr=0.00%, sys=0.31%, ctx=69, majf=0, minf=8449 00:16:26.264 IO depths : 1=3.0%, 2=6.1%, 4=12.1%, 8=24.2%, 16=48.5%, 32=6.1%, >=64=0.0% 00:16:26.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.264 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:26.264 issued rwts: total=33,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.264 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.264 job3: (groupid=0, jobs=1): err= 0: pid=643088: Thu Oct 17 17:41:02 2024 00:16:26.264 read: IOPS=19, BW=19.4MiB/s (20.3MB/s)(237MiB/12244msec) 
00:16:26.264 slat (usec): min=41, max=2069.3k, avg=42643.46, stdev=267420.84 00:16:26.264 clat (msec): min=277, max=10454, avg=5780.03, stdev=4264.48 00:16:26.264 lat (msec): min=278, max=10455, avg=5822.67, stdev=4260.29 00:16:26.264 clat percentiles (msec): 00:16:26.264 | 1.00th=[ 279], 5.00th=[ 284], 10.00th=[ 284], 20.00th=[ 372], 00:16:26.264 | 30.00th=[ 1921], 40.00th=[ 2039], 50.00th=[ 6342], 60.00th=[ 8658], 00:16:26.264 | 70.00th=[10268], 80.00th=[10402], 90.00th=[10402], 95.00th=[10402], 00:16:26.264 | 99.00th=[10402], 99.50th=[10402], 99.90th=[10402], 99.95th=[10402], 00:16:26.264 | 99.99th=[10402] 00:16:26.264 bw ( KiB/s): min= 1582, max=141029, per=1.24%, avg=37400.50, stdev=54742.33, samples=6 00:16:26.264 iops : min= 1, max= 137, avg=36.00, stdev=53.31, samples=6 00:16:26.264 lat (msec) : 500=21.52%, 2000=14.35%, >=2000=64.14% 00:16:26.264 cpu : usr=0.02%, sys=1.01%, ctx=167, majf=0, minf=32769 00:16:26.264 IO depths : 1=0.4%, 2=0.8%, 4=1.7%, 8=3.4%, 16=6.8%, 32=13.5%, >=64=73.4% 00:16:26.264 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.264 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:16:26.264 issued rwts: total=237,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.264 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.264 job3: (groupid=0, jobs=1): err= 0: pid=643089: Thu Oct 17 17:41:02 2024 00:16:26.264 read: IOPS=2, BW=2610KiB/s (2673kB/s)(26.0MiB/10200msec) 00:16:26.265 slat (usec): min=966, max=2082.2k, avg=387888.19, stdev=773809.18 00:16:26.265 clat (msec): min=114, max=10093, avg=5053.97, stdev=2730.06 00:16:26.265 lat (msec): min=2174, max=10199, avg=5441.86, stdev=2716.59 00:16:26.265 clat percentiles (msec): 00:16:26.265 | 1.00th=[ 115], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 2232], 00:16:26.265 | 30.00th=[ 2265], 40.00th=[ 4396], 50.00th=[ 4396], 60.00th=[ 6477], 00:16:26.265 | 70.00th=[ 6477], 80.00th=[ 6544], 90.00th=[ 8658], 95.00th=[10134], 00:16:26.265 | 99.00th=[10134], 99.50th=[10134], 99.90th=[10134], 99.95th=[10134], 00:16:26.265 | 99.99th=[10134] 00:16:26.265 lat (msec) : 250=3.85%, >=2000=96.15% 00:16:26.265 cpu : usr=0.01%, sys=0.25%, ctx=55, majf=0, minf=6657 00:16:26.265 IO depths : 1=3.8%, 2=7.7%, 4=15.4%, 8=30.8%, 16=42.3%, 32=0.0%, >=64=0.0% 00:16:26.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.265 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:26.265 issued rwts: total=26,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.265 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.265 job3: (groupid=0, jobs=1): err= 0: pid=643090: Thu Oct 17 17:41:02 2024 00:16:26.265 read: IOPS=4, BW=4505KiB/s (4613kB/s)(54.0MiB/12275msec) 00:16:26.265 slat (usec): min=956, max=2069.1k, avg=188012.69, stdev=563329.79 00:16:26.265 clat (msec): min=2121, max=12273, avg=9891.37, stdev=3189.64 00:16:26.265 lat (msec): min=4175, max=12274, avg=10079.38, stdev=3017.63 00:16:26.265 clat percentiles (msec): 00:16:26.265 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6342], 00:16:26.265 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[12147], 60.00th=[12147], 00:16:26.265 | 70.00th=[12147], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:16:26.265 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:16:26.265 | 99.99th=[12281] 00:16:26.265 lat (msec) : >=2000=100.00% 00:16:26.265 cpu : usr=0.00%, sys=0.50%, ctx=76, majf=0, minf=13825 00:16:26.265 IO depths : 1=1.9%, 
2=3.7%, 4=7.4%, 8=14.8%, 16=29.6%, 32=42.6%, >=64=0.0% 00:16:26.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.265 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:26.265 issued rwts: total=54,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.265 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.265 job3: (groupid=0, jobs=1): err= 0: pid=643091: Thu Oct 17 17:41:02 2024 00:16:26.265 read: IOPS=1, BW=1940KiB/s (1987kB/s)(23.0MiB/12138msec) 00:16:26.265 slat (usec): min=822, max=2148.7k, avg=435550.73, stdev=824134.52 00:16:26.265 clat (msec): min=2119, max=12108, avg=8284.28, stdev=3337.90 00:16:26.265 lat (msec): min=4166, max=12137, avg=8719.83, stdev=3144.94 00:16:26.265 clat percentiles (msec): 00:16:26.265 | 1.00th=[ 2123], 5.00th=[ 4178], 10.00th=[ 4212], 20.00th=[ 4245], 00:16:26.265 | 30.00th=[ 6342], 40.00th=[ 6342], 50.00th=[ 8557], 60.00th=[10671], 00:16:26.265 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12147], 95.00th=[12147], 00:16:26.265 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:16:26.265 | 99.99th=[12147] 00:16:26.265 lat (msec) : >=2000=100.00% 00:16:26.265 cpu : usr=0.00%, sys=0.18%, ctx=56, majf=0, minf=5889 00:16:26.265 IO depths : 1=4.3%, 2=8.7%, 4=17.4%, 8=34.8%, 16=34.8%, 32=0.0%, >=64=0.0% 00:16:26.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.265 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:26.265 issued rwts: total=23,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.265 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.265 job3: (groupid=0, jobs=1): err= 0: pid=643092: Thu Oct 17 17:41:02 2024 00:16:26.265 read: IOPS=15, BW=15.4MiB/s (16.1MB/s)(188MiB/12210msec) 00:16:26.265 slat (usec): min=114, max=2090.8k, avg=53624.97, stdev=300279.50 00:16:26.265 clat (msec): min=399, max=8305, avg=3852.25, stdev=2006.84 00:16:26.265 lat (msec): min=401, max=8322, avg=3905.88, stdev=2034.84 00:16:26.265 clat percentiles (msec): 00:16:26.265 | 1.00th=[ 401], 5.00th=[ 401], 10.00th=[ 422], 20.00th=[ 3742], 00:16:26.265 | 30.00th=[ 3775], 40.00th=[ 3842], 50.00th=[ 3910], 60.00th=[ 3977], 00:16:26.265 | 70.00th=[ 4044], 80.00th=[ 4077], 90.00th=[ 6879], 95.00th=[ 8288], 00:16:26.265 | 99.00th=[ 8288], 99.50th=[ 8288], 99.90th=[ 8288], 99.95th=[ 8288], 00:16:26.265 | 99.99th=[ 8288] 00:16:26.265 bw ( KiB/s): min= 1662, max=122880, per=2.07%, avg=62271.00, stdev=85714.07, samples=2 00:16:26.265 iops : min= 1, max= 120, avg=60.50, stdev=84.15, samples=2 00:16:26.265 lat (msec) : 500=15.96%, >=2000=84.04% 00:16:26.265 cpu : usr=0.00%, sys=1.06%, ctx=133, majf=0, minf=32769 00:16:26.265 IO depths : 1=0.5%, 2=1.1%, 4=2.1%, 8=4.3%, 16=8.5%, 32=17.0%, >=64=66.5% 00:16:26.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.265 complete : 0=0.0%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.6% 00:16:26.265 issued rwts: total=188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.265 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.265 job3: (groupid=0, jobs=1): err= 0: pid=643093: Thu Oct 17 17:41:02 2024 00:16:26.265 read: IOPS=4, BW=4497KiB/s (4605kB/s)(54.0MiB/12296msec) 00:16:26.265 slat (usec): min=1018, max=2077.3k, avg=188290.83, stdev=564991.72 00:16:26.265 clat (msec): min=2127, max=12292, avg=10100.18, stdev=2968.17 00:16:26.265 lat (msec): min=4194, max=12295, avg=10288.47, stdev=2768.65 00:16:26.265 clat percentiles (msec): 
00:16:26.265 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4279], 20.00th=[ 6409], 00:16:26.265 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[12013], 60.00th=[12147], 00:16:26.265 | 70.00th=[12147], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:16:26.265 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:16:26.265 | 99.99th=[12281] 00:16:26.265 lat (msec) : >=2000=100.00% 00:16:26.265 cpu : usr=0.01%, sys=0.42%, ctx=81, majf=0, minf=13825 00:16:26.265 IO depths : 1=1.9%, 2=3.7%, 4=7.4%, 8=14.8%, 16=29.6%, 32=42.6%, >=64=0.0% 00:16:26.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.265 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:26.265 issued rwts: total=54,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.265 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.265 job3: (groupid=0, jobs=1): err= 0: pid=643094: Thu Oct 17 17:41:02 2024 00:16:26.265 read: IOPS=1, BW=1600KiB/s (1638kB/s)(19.0MiB/12160msec) 00:16:26.265 slat (msec): min=10, max=2125, avg=528.71, stdev=880.90 00:16:26.265 clat (msec): min=2114, max=12093, avg=7864.70, stdev=2711.87 00:16:26.265 lat (msec): min=4167, max=12159, avg=8393.41, stdev=2499.39 00:16:26.265 clat percentiles (msec): 00:16:26.265 | 1.00th=[ 2123], 5.00th=[ 2123], 10.00th=[ 4178], 20.00th=[ 4245], 00:16:26.265 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[ 8490], 60.00th=[ 8490], 00:16:26.265 | 70.00th=[ 8557], 80.00th=[10671], 90.00th=[12013], 95.00th=[12147], 00:16:26.265 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:16:26.265 | 99.99th=[12147] 00:16:26.265 lat (msec) : >=2000=100.00% 00:16:26.265 cpu : usr=0.00%, sys=0.16%, ctx=54, majf=0, minf=4865 00:16:26.265 IO depths : 1=5.3%, 2=10.5%, 4=21.1%, 8=42.1%, 16=21.1%, 32=0.0%, >=64=0.0% 00:16:26.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.265 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:16:26.265 issued rwts: total=19,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.265 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.265 job3: (groupid=0, jobs=1): err= 0: pid=643095: Thu Oct 17 17:41:02 2024 00:16:26.265 read: IOPS=4, BW=4202KiB/s (4302kB/s)(42.0MiB/10236msec) 00:16:26.265 slat (usec): min=1078, max=2093.0k, avg=240995.49, stdev=636529.68 00:16:26.265 clat (msec): min=113, max=10233, avg=7591.16, stdev=3135.02 00:16:26.265 lat (msec): min=2154, max=10235, avg=7832.16, stdev=2928.45 00:16:26.265 clat percentiles (msec): 00:16:26.265 | 1.00th=[ 114], 5.00th=[ 2165], 10.00th=[ 2198], 20.00th=[ 4329], 00:16:26.265 | 30.00th=[ 6477], 40.00th=[ 6544], 50.00th=[ 8658], 60.00th=[10134], 00:16:26.265 | 70.00th=[10268], 80.00th=[10268], 90.00th=[10268], 95.00th=[10268], 00:16:26.265 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:16:26.265 | 99.99th=[10268] 00:16:26.265 lat (msec) : 250=2.38%, >=2000=97.62% 00:16:26.265 cpu : usr=0.00%, sys=0.39%, ctx=61, majf=0, minf=10753 00:16:26.265 IO depths : 1=2.4%, 2=4.8%, 4=9.5%, 8=19.0%, 16=38.1%, 32=26.2%, >=64=0.0% 00:16:26.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.265 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:16:26.265 issued rwts: total=42,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.265 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.265 job3: (groupid=0, jobs=1): err= 0: pid=643096: Thu Oct 17 17:41:02 2024 
00:16:26.265 read: IOPS=6, BW=6656KiB/s (6816kB/s)(80.0MiB/12307msec) 00:16:26.265 slat (usec): min=903, max=2066.2k, avg=127289.15, stdev=470560.33 00:16:26.265 clat (msec): min=2123, max=12304, avg=10697.45, stdev=2711.77 00:16:26.265 lat (msec): min=4180, max=12306, avg=10824.74, stdev=2537.62 00:16:26.265 clat percentiles (msec): 00:16:26.265 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 6342], 20.00th=[ 8557], 00:16:26.265 | 30.00th=[10671], 40.00th=[12147], 50.00th=[12147], 60.00th=[12281], 00:16:26.265 | 70.00th=[12281], 80.00th=[12281], 90.00th=[12281], 95.00th=[12281], 00:16:26.265 | 99.00th=[12281], 99.50th=[12281], 99.90th=[12281], 99.95th=[12281], 00:16:26.265 | 99.99th=[12281] 00:16:26.265 lat (msec) : >=2000=100.00% 00:16:26.265 cpu : usr=0.00%, sys=0.75%, ctx=101, majf=0, minf=20481 00:16:26.265 IO depths : 1=1.2%, 2=2.5%, 4=5.0%, 8=10.0%, 16=20.0%, 32=40.0%, >=64=21.3% 00:16:26.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.265 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:26.265 issued rwts: total=80,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.265 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.265 job4: (groupid=0, jobs=1): err= 0: pid=643097: Thu Oct 17 17:41:02 2024 00:16:26.265 read: IOPS=29, BW=29.9MiB/s (31.3MB/s)(306MiB/10251msec) 00:16:26.265 slat (usec): min=90, max=2044.0k, avg=33123.17, stdev=226648.67 00:16:26.265 clat (msec): min=112, max=6440, avg=2046.18, stdev=1768.86 00:16:26.265 lat (msec): min=395, max=6443, avg=2079.30, stdev=1781.77 00:16:26.265 clat percentiles (msec): 00:16:26.265 | 1.00th=[ 393], 5.00th=[ 397], 10.00th=[ 397], 20.00th=[ 401], 00:16:26.265 | 30.00th=[ 405], 40.00th=[ 472], 50.00th=[ 1469], 60.00th=[ 3339], 00:16:26.265 | 70.00th=[ 3440], 80.00th=[ 3507], 90.00th=[ 3608], 95.00th=[ 6409], 00:16:26.265 | 99.00th=[ 6409], 99.50th=[ 6409], 99.90th=[ 6409], 99.95th=[ 6409], 00:16:26.265 | 99.99th=[ 6409] 00:16:26.265 bw ( KiB/s): min= 1477, max=290816, per=2.43%, avg=73204.20, stdev=123553.43, samples=5 00:16:26.265 iops : min= 1, max= 284, avg=71.40, stdev=120.72, samples=5 00:16:26.265 lat (msec) : 250=0.33%, 500=41.83%, 750=6.54%, 2000=2.94%, >=2000=48.37% 00:16:26.265 cpu : usr=0.01%, sys=1.40%, ctx=332, majf=0, minf=32769 00:16:26.265 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.6%, 16=5.2%, 32=10.5%, >=64=79.4% 00:16:26.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.265 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:16:26.265 issued rwts: total=306,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.265 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.266 job4: (groupid=0, jobs=1): err= 0: pid=643098: Thu Oct 17 17:41:02 2024 00:16:26.266 read: IOPS=31, BW=31.7MiB/s (33.3MB/s)(324MiB/10210msec) 00:16:26.266 slat (usec): min=65, max=2126.1k, avg=31138.32, stdev=213220.02 00:16:26.266 clat (msec): min=119, max=5441, avg=2584.71, stdev=2137.12 00:16:26.266 lat (msec): min=510, max=5445, avg=2615.85, stdev=2135.96 00:16:26.266 clat percentiles (msec): 00:16:26.266 | 1.00th=[ 514], 5.00th=[ 558], 10.00th=[ 609], 20.00th=[ 634], 00:16:26.266 | 30.00th=[ 651], 40.00th=[ 760], 50.00th=[ 827], 60.00th=[ 4279], 00:16:26.266 | 70.00th=[ 5000], 80.00th=[ 5134], 90.00th=[ 5336], 95.00th=[ 5403], 00:16:26.266 | 99.00th=[ 5470], 99.50th=[ 5470], 99.90th=[ 5470], 99.95th=[ 5470], 00:16:26.266 | 99.99th=[ 5470] 00:16:26.266 bw ( KiB/s): min= 1572, max=212992, per=1.91%, avg=57567.29, 
stdev=82818.54, samples=7 00:16:26.266 iops : min= 1, max= 208, avg=56.00, stdev=81.05, samples=7 00:16:26.266 lat (msec) : 250=0.31%, 750=37.96%, 1000=14.20%, 2000=1.54%, >=2000=45.99% 00:16:26.266 cpu : usr=0.03%, sys=0.95%, ctx=468, majf=0, minf=32769 00:16:26.266 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.5%, 16=4.9%, 32=9.9%, >=64=80.6% 00:16:26.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.266 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:16:26.266 issued rwts: total=324,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.266 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.266 job4: (groupid=0, jobs=1): err= 0: pid=643099: Thu Oct 17 17:41:02 2024 00:16:26.266 read: IOPS=89, BW=89.1MiB/s (93.4MB/s)(904MiB/10150msec) 00:16:26.266 slat (usec): min=44, max=2051.0k, avg=11072.89, stdev=112393.19 00:16:26.266 clat (msec): min=135, max=4074, avg=918.43, stdev=875.70 00:16:26.266 lat (msec): min=164, max=4162, avg=929.50, stdev=883.90 00:16:26.266 clat percentiles (msec): 00:16:26.266 | 1.00th=[ 190], 5.00th=[ 215], 10.00th=[ 241], 20.00th=[ 264], 00:16:26.266 | 30.00th=[ 266], 40.00th=[ 275], 50.00th=[ 485], 60.00th=[ 592], 00:16:26.266 | 70.00th=[ 1028], 80.00th=[ 1938], 90.00th=[ 2366], 95.00th=[ 2467], 00:16:26.266 | 99.00th=[ 2702], 99.50th=[ 4077], 99.90th=[ 4077], 99.95th=[ 4077], 00:16:26.266 | 99.99th=[ 4077] 00:16:26.266 bw ( KiB/s): min= 1735, max=505856, per=6.61%, avg=198843.12, stdev=177580.41, samples=8 00:16:26.266 iops : min= 1, max= 494, avg=194.00, stdev=173.58, samples=8 00:16:26.266 lat (msec) : 250=12.94%, 500=41.92%, 750=7.52%, 1000=6.64%, 2000=11.84% 00:16:26.266 lat (msec) : >=2000=19.14% 00:16:26.266 cpu : usr=0.04%, sys=1.74%, ctx=986, majf=0, minf=32769 00:16:26.266 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.0% 00:16:26.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.266 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:26.266 issued rwts: total=904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.266 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.266 job4: (groupid=0, jobs=1): err= 0: pid=643100: Thu Oct 17 17:41:02 2024 00:16:26.266 read: IOPS=46, BW=46.2MiB/s (48.4MB/s)(469MiB/10152msec) 00:16:26.266 slat (usec): min=47, max=2134.9k, avg=21377.87, stdev=176163.95 00:16:26.266 clat (msec): min=123, max=4737, avg=1728.04, stdev=1790.10 00:16:26.266 lat (msec): min=360, max=4737, avg=1749.42, stdev=1795.91 00:16:26.266 clat percentiles (msec): 00:16:26.266 | 1.00th=[ 359], 5.00th=[ 363], 10.00th=[ 368], 20.00th=[ 388], 00:16:26.266 | 30.00th=[ 489], 40.00th=[ 609], 50.00th=[ 693], 60.00th=[ 793], 00:16:26.266 | 70.00th=[ 2198], 80.00th=[ 4463], 90.00th=[ 4597], 95.00th=[ 4665], 00:16:26.266 | 99.00th=[ 4732], 99.50th=[ 4732], 99.90th=[ 4732], 99.95th=[ 4732], 00:16:26.266 | 99.99th=[ 4732] 00:16:26.266 bw ( KiB/s): min= 1656, max=311296, per=3.88%, avg=116670.67, stdev=129116.51, samples=6 00:16:26.266 iops : min= 1, max= 304, avg=113.83, stdev=126.20, samples=6 00:16:26.266 lat (msec) : 250=0.21%, 500=30.06%, 750=26.01%, 1000=13.01%, >=2000=30.70% 00:16:26.266 cpu : usr=0.01%, sys=0.99%, ctx=685, majf=0, minf=32769 00:16:26.266 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.4%, 32=6.8%, >=64=86.6% 00:16:26.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.266 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 
00:16:26.266 issued rwts: total=469,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.266 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.266 job4: (groupid=0, jobs=1): err= 0: pid=643101: Thu Oct 17 17:41:02 2024 00:16:26.266 read: IOPS=12, BW=12.7MiB/s (13.3MB/s)(131MiB/10300msec) 00:16:26.266 slat (usec): min=907, max=2094.0k, avg=77700.97, stdev=359342.01 00:16:26.266 clat (msec): min=119, max=10282, avg=6814.24, stdev=2795.29 00:16:26.266 lat (msec): min=2213, max=10285, avg=6891.94, stdev=2748.76 00:16:26.266 clat percentiles (msec): 00:16:26.266 | 1.00th=[ 2198], 5.00th=[ 4010], 10.00th=[ 4077], 20.00th=[ 4144], 00:16:26.266 | 30.00th=[ 4212], 40.00th=[ 4329], 50.00th=[ 6477], 60.00th=[ 8658], 00:16:26.266 | 70.00th=[10134], 80.00th=[10134], 90.00th=[10268], 95.00th=[10268], 00:16:26.266 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:16:26.266 | 99.99th=[10268] 00:16:26.266 bw ( KiB/s): min= 2043, max= 4087, per=0.10%, avg=3065.00, stdev=1445.33, samples=2 00:16:26.266 iops : min= 1, max= 3, avg= 2.00, stdev= 1.41, samples=2 00:16:26.266 lat (msec) : 250=0.76%, >=2000=99.24% 00:16:26.266 cpu : usr=0.00%, sys=1.26%, ctx=196, majf=0, minf=32769 00:16:26.266 IO depths : 1=0.8%, 2=1.5%, 4=3.1%, 8=6.1%, 16=12.2%, 32=24.4%, >=64=51.9% 00:16:26.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.266 complete : 0=0.0%, 4=80.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=20.0% 00:16:26.266 issued rwts: total=131,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.266 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.266 job4: (groupid=0, jobs=1): err= 0: pid=643102: Thu Oct 17 17:41:02 2024 00:16:26.266 read: IOPS=230, BW=231MiB/s (242MB/s)(2806MiB/12159msec) 00:16:26.266 slat (usec): min=51, max=2009.4k, avg=3561.85, stdev=38271.19 00:16:26.266 clat (msec): min=205, max=2667, avg=539.16, stdev=597.89 00:16:26.266 lat (msec): min=206, max=2671, avg=542.72, stdev=599.12 00:16:26.266 clat percentiles (msec): 00:16:26.266 | 1.00th=[ 236], 5.00th=[ 259], 10.00th=[ 266], 20.00th=[ 271], 00:16:26.266 | 30.00th=[ 288], 40.00th=[ 347], 50.00th=[ 359], 60.00th=[ 376], 00:16:26.266 | 70.00th=[ 409], 80.00th=[ 456], 90.00th=[ 575], 95.00th=[ 2333], 00:16:26.266 | 99.00th=[ 2635], 99.50th=[ 2635], 99.90th=[ 2668], 99.95th=[ 2668], 00:16:26.266 | 99.99th=[ 2668] 00:16:26.266 bw ( KiB/s): min= 2043, max=497664, per=10.61%, avg=318986.12, stdev=134663.98, samples=17 00:16:26.266 iops : min= 1, max= 486, avg=311.41, stdev=131.61, samples=17 00:16:26.266 lat (msec) : 250=2.10%, 500=85.28%, 750=3.56%, >=2000=9.05% 00:16:26.266 cpu : usr=0.30%, sys=2.87%, ctx=2361, majf=0, minf=32769 00:16:26.266 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:16:26.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.266 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:26.266 issued rwts: total=2806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.266 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.266 job4: (groupid=0, jobs=1): err= 0: pid=643103: Thu Oct 17 17:41:02 2024 00:16:26.266 read: IOPS=174, BW=175MiB/s (183MB/s)(2123MiB/12156msec) 00:16:26.266 slat (usec): min=41, max=2030.4k, avg=4706.17, stdev=68030.22 00:16:26.266 clat (msec): min=131, max=6075, avg=462.98, stdev=789.74 00:16:26.266 lat (msec): min=133, max=6078, avg=467.68, stdev=799.11 00:16:26.266 clat percentiles (msec): 00:16:26.266 | 1.00th=[ 134], 5.00th=[ 136], 
10.00th=[ 136], 20.00th=[ 140], 00:16:26.266 | 30.00th=[ 186], 40.00th=[ 268], 50.00th=[ 275], 60.00th=[ 288], 00:16:26.266 | 70.00th=[ 338], 80.00th=[ 384], 90.00th=[ 401], 95.00th=[ 2232], 00:16:26.266 | 99.00th=[ 4665], 99.50th=[ 6074], 99.90th=[ 6074], 99.95th=[ 6074], 00:16:26.266 | 99.99th=[ 6074] 00:16:26.266 bw ( KiB/s): min=294912, max=917504, per=16.67%, avg=501244.88, stdev=190418.20, samples=8 00:16:26.266 iops : min= 288, max= 896, avg=489.38, stdev=185.98, samples=8 00:16:26.266 lat (msec) : 250=35.70%, 500=56.24%, >=2000=8.05% 00:16:26.266 cpu : usr=0.08%, sys=2.16%, ctx=2038, majf=0, minf=32769 00:16:26.266 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:16:26.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.266 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:26.266 issued rwts: total=2123,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.266 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.266 job4: (groupid=0, jobs=1): err= 0: pid=643104: Thu Oct 17 17:41:02 2024 00:16:26.266 read: IOPS=5, BW=5990KiB/s (6134kB/s)(71.0MiB/12138msec) 00:16:26.266 slat (usec): min=569, max=2070.6k, avg=140851.47, stdev=495014.31 00:16:26.266 clat (msec): min=2137, max=12135, avg=8661.36, stdev=3251.96 00:16:26.266 lat (msec): min=2149, max=12137, avg=8802.21, stdev=3181.15 00:16:26.266 clat percentiles (msec): 00:16:26.266 | 1.00th=[ 2140], 5.00th=[ 2165], 10.00th=[ 4245], 20.00th=[ 6342], 00:16:26.266 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[ 8557], 60.00th=[10671], 00:16:26.266 | 70.00th=[12013], 80.00th=[12013], 90.00th=[12147], 95.00th=[12147], 00:16:26.266 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12147], 99.95th=[12147], 00:16:26.266 | 99.99th=[12147] 00:16:26.266 lat (msec) : >=2000=100.00% 00:16:26.266 cpu : usr=0.00%, sys=0.55%, ctx=59, majf=0, minf=18177 00:16:26.266 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.3%, 16=22.5%, 32=45.1%, >=64=11.3% 00:16:26.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.266 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:26.266 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.266 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.266 job4: (groupid=0, jobs=1): err= 0: pid=643105: Thu Oct 17 17:41:02 2024 00:16:26.266 read: IOPS=17, BW=17.3MiB/s (18.2MB/s)(178MiB/10270msec) 00:16:26.266 slat (usec): min=504, max=3367.4k, avg=57015.52, stdev=343947.58 00:16:26.266 clat (msec): min=119, max=6530, avg=3705.95, stdev=1416.38 00:16:26.266 lat (msec): min=761, max=6537, avg=3762.96, stdev=1388.44 00:16:26.266 clat percentiles (msec): 00:16:26.266 | 1.00th=[ 760], 5.00th=[ 768], 10.00th=[ 802], 20.00th=[ 3507], 00:16:26.266 | 30.00th=[ 3608], 40.00th=[ 3708], 50.00th=[ 3809], 60.00th=[ 3943], 00:16:26.266 | 70.00th=[ 4044], 80.00th=[ 4144], 90.00th=[ 5067], 95.00th=[ 6544], 00:16:26.266 | 99.00th=[ 6544], 99.50th=[ 6544], 99.90th=[ 6544], 99.95th=[ 6544], 00:16:26.266 | 99.99th=[ 6544] 00:16:26.266 bw ( KiB/s): min= 1440, max=100352, per=1.15%, avg=34613.33, stdev=56932.17, samples=3 00:16:26.266 iops : min= 1, max= 98, avg=33.67, stdev=55.72, samples=3 00:16:26.266 lat (msec) : 250=0.56%, 1000=12.36%, >=2000=87.08% 00:16:26.266 cpu : usr=0.02%, sys=1.10%, ctx=332, majf=0, minf=32769 00:16:26.266 IO depths : 1=0.6%, 2=1.1%, 4=2.2%, 8=4.5%, 16=9.0%, 32=18.0%, >=64=64.6% 00:16:26.266 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:16:26.266 complete : 0=0.0%, 4=98.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.9% 00:16:26.266 issued rwts: total=178,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.266 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.266 job4: (groupid=0, jobs=1): err= 0: pid=643106: Thu Oct 17 17:41:02 2024 00:16:26.266 read: IOPS=8, BW=8446KiB/s (8649kB/s)(85.0MiB/10305msec) 00:16:26.266 slat (usec): min=737, max=2066.8k, avg=119785.09, stdev=455578.26 00:16:26.267 clat (msec): min=122, max=10301, avg=8550.55, stdev=2776.13 00:16:26.267 lat (msec): min=2174, max=10304, avg=8670.33, stdev=2623.61 00:16:26.267 clat percentiles (msec): 00:16:26.267 | 1.00th=[ 123], 5.00th=[ 2232], 10.00th=[ 4329], 20.00th=[ 6477], 00:16:26.267 | 30.00th=[ 8658], 40.00th=[10000], 50.00th=[10134], 60.00th=[10268], 00:16:26.267 | 70.00th=[10268], 80.00th=[10268], 90.00th=[10268], 95.00th=[10268], 00:16:26.267 | 99.00th=[10268], 99.50th=[10268], 99.90th=[10268], 99.95th=[10268], 00:16:26.267 | 99.99th=[10268] 00:16:26.267 lat (msec) : 250=1.18%, >=2000=98.82% 00:16:26.267 cpu : usr=0.00%, sys=0.93%, ctx=104, majf=0, minf=21761 00:16:26.267 IO depths : 1=1.2%, 2=2.4%, 4=4.7%, 8=9.4%, 16=18.8%, 32=37.6%, >=64=25.9% 00:16:26.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.267 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:16:26.267 issued rwts: total=85,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.267 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.267 job4: (groupid=0, jobs=1): err= 0: pid=643107: Thu Oct 17 17:41:02 2024 00:16:26.267 read: IOPS=106, BW=107MiB/s (112MB/s)(1086MiB/10195msec) 00:16:26.267 slat (usec): min=38, max=2090.3k, avg=9257.84, stdev=99422.20 00:16:26.267 clat (msec): min=135, max=4368, avg=743.85, stdev=695.66 00:16:26.267 lat (msec): min=195, max=5223, avg=753.11, stdev=711.98 00:16:26.267 clat percentiles (msec): 00:16:26.267 | 1.00th=[ 201], 5.00th=[ 230], 10.00th=[ 245], 20.00th=[ 266], 00:16:26.267 | 30.00th=[ 268], 40.00th=[ 279], 50.00th=[ 550], 60.00th=[ 651], 00:16:26.267 | 70.00th=[ 793], 80.00th=[ 1028], 90.00th=[ 2299], 95.00th=[ 2366], 00:16:26.267 | 99.00th=[ 2467], 99.50th=[ 2903], 99.90th=[ 4396], 99.95th=[ 4396], 00:16:26.267 | 99.99th=[ 4396] 00:16:26.267 bw ( KiB/s): min= 1615, max=492582, per=6.53%, avg=196260.80, stdev=181958.53, samples=10 00:16:26.267 iops : min= 1, max= 481, avg=191.50, stdev=177.87, samples=10 00:16:26.267 lat (msec) : 250=11.97%, 500=35.17%, 750=20.17%, 1000=9.02%, 2000=11.23% 00:16:26.267 lat (msec) : >=2000=12.43% 00:16:26.267 cpu : usr=0.01%, sys=1.50%, ctx=1373, majf=0, minf=32769 00:16:26.267 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=2.9%, >=64=94.2% 00:16:26.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.267 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:26.267 issued rwts: total=1086,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.267 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.267 job4: (groupid=0, jobs=1): err= 0: pid=643108: Thu Oct 17 17:41:02 2024 00:16:26.267 read: IOPS=161, BW=162MiB/s (170MB/s)(1623MiB/10039msec) 00:16:26.267 slat (usec): min=37, max=2049.8k, avg=6157.05, stdev=78893.16 00:16:26.267 clat (msec): min=37, max=6224, avg=468.18, stdev=870.40 00:16:26.267 lat (msec): min=39, max=6227, avg=474.34, stdev=882.25 00:16:26.267 clat percentiles (msec): 00:16:26.267 | 1.00th=[ 75], 5.00th=[ 133], 10.00th=[ 134], 20.00th=[ 
155], 00:16:26.267 | 30.00th=[ 232], 40.00th=[ 279], 50.00th=[ 317], 60.00th=[ 351], 00:16:26.267 | 70.00th=[ 409], 80.00th=[ 498], 90.00th=[ 558], 95.00th=[ 584], 00:16:26.267 | 99.00th=[ 6208], 99.50th=[ 6208], 99.90th=[ 6208], 99.95th=[ 6208], 00:16:26.267 | 99.99th=[ 6208] 00:16:26.267 bw ( KiB/s): min=161469, max=843776, per=12.40%, avg=372792.43, stdev=224502.47, samples=7 00:16:26.267 iops : min= 157, max= 824, avg=363.86, stdev=219.36, samples=7 00:16:26.267 lat (msec) : 50=0.49%, 100=1.11%, 250=31.24%, 500=48.00%, 750=15.90% 00:16:26.267 lat (msec) : >=2000=3.27% 00:16:26.267 cpu : usr=0.08%, sys=2.41%, ctx=1507, majf=0, minf=32769 00:16:26.267 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.1% 00:16:26.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.267 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:26.267 issued rwts: total=1623,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.267 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.267 job4: (groupid=0, jobs=1): err= 0: pid=643109: Thu Oct 17 17:41:02 2024 00:16:26.267 read: IOPS=90, BW=90.7MiB/s (95.1MB/s)(930MiB/10256msec) 00:16:26.267 slat (usec): min=51, max=2062.6k, avg=10896.55, stdev=106496.91 00:16:26.267 clat (msec): min=116, max=4734, avg=1055.77, stdev=1142.55 00:16:26.267 lat (msec): min=124, max=4738, avg=1066.66, stdev=1150.77 00:16:26.267 clat percentiles (msec): 00:16:26.267 | 1.00th=[ 125], 5.00th=[ 126], 10.00th=[ 127], 20.00th=[ 128], 00:16:26.267 | 30.00th=[ 266], 40.00th=[ 567], 50.00th=[ 617], 60.00th=[ 869], 00:16:26.267 | 70.00th=[ 1167], 80.00th=[ 1737], 90.00th=[ 2123], 95.00th=[ 3507], 00:16:26.267 | 99.00th=[ 4665], 99.50th=[ 4732], 99.90th=[ 4732], 99.95th=[ 4732], 00:16:26.267 | 99.99th=[ 4732] 00:16:26.267 bw ( KiB/s): min= 6144, max=555008, per=7.80%, avg=234519.43, stdev=179042.62, samples=7 00:16:26.267 iops : min= 6, max= 542, avg=228.86, stdev=174.87, samples=7 00:16:26.267 lat (msec) : 250=29.46%, 500=6.56%, 750=20.22%, 1000=8.60%, 2000=21.83% 00:16:26.267 lat (msec) : >=2000=13.33% 00:16:26.267 cpu : usr=0.05%, sys=1.72%, ctx=1235, majf=0, minf=32769 00:16:26.267 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.4%, >=64=93.2% 00:16:26.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.267 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:26.267 issued rwts: total=930,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.267 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.267 job5: (groupid=0, jobs=1): err= 0: pid=643110: Thu Oct 17 17:41:02 2024 00:16:26.267 read: IOPS=56, BW=56.9MiB/s (59.7MB/s)(692MiB/12161msec) 00:16:26.267 slat (usec): min=51, max=2113.6k, avg=14513.87, stdev=136162.42 00:16:26.267 clat (msec): min=412, max=7214, avg=2160.11, stdev=2304.77 00:16:26.267 lat (msec): min=415, max=7216, avg=2174.62, stdev=2310.51 00:16:26.267 clat percentiles (msec): 00:16:26.267 | 1.00th=[ 414], 5.00th=[ 426], 10.00th=[ 435], 20.00th=[ 464], 00:16:26.267 | 30.00th=[ 550], 40.00th=[ 785], 50.00th=[ 827], 60.00th=[ 877], 00:16:26.267 | 70.00th=[ 2735], 80.00th=[ 2903], 90.00th=[ 6745], 95.00th=[ 7013], 00:16:26.267 | 99.00th=[ 7148], 99.50th=[ 7148], 99.90th=[ 7215], 99.95th=[ 7215], 00:16:26.267 | 99.99th=[ 7215] 00:16:26.267 bw ( KiB/s): min= 1706, max=239616, per=4.27%, avg=128482.22, stdev=80405.04, samples=9 00:16:26.267 iops : min= 1, max= 234, avg=125.22, stdev=78.75, samples=9 00:16:26.267 lat (msec) 
: 500=26.01%, 750=5.49%, 1000=30.64%, >=2000=37.86% 00:16:26.267 cpu : usr=0.02%, sys=1.05%, ctx=1585, majf=0, minf=32769 00:16:26.267 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.6%, >=64=90.9% 00:16:26.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.267 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:26.267 issued rwts: total=692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.267 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.267 job5: (groupid=0, jobs=1): err= 0: pid=643111: Thu Oct 17 17:41:02 2024 00:16:26.267 read: IOPS=142, BW=143MiB/s (150MB/s)(1736MiB/12150msec) 00:16:26.267 slat (usec): min=35, max=2130.0k, avg=5756.25, stdev=70217.26 00:16:26.267 clat (msec): min=218, max=4653, avg=869.16, stdev=1184.49 00:16:26.267 lat (msec): min=220, max=4655, avg=874.92, stdev=1187.97 00:16:26.267 clat percentiles (msec): 00:16:26.267 | 1.00th=[ 247], 5.00th=[ 251], 10.00th=[ 253], 20.00th=[ 259], 00:16:26.267 | 30.00th=[ 338], 40.00th=[ 397], 50.00th=[ 414], 60.00th=[ 443], 00:16:26.267 | 70.00th=[ 550], 80.00th=[ 625], 90.00th=[ 2802], 95.00th=[ 4329], 00:16:26.267 | 99.00th=[ 4597], 99.50th=[ 4597], 99.90th=[ 4665], 99.95th=[ 4665], 00:16:26.267 | 99.99th=[ 4665] 00:16:26.267 bw ( KiB/s): min= 1735, max=503808, per=7.83%, avg=235285.86, stdev=160774.77, samples=14 00:16:26.267 iops : min= 1, max= 492, avg=229.64, stdev=157.09, samples=14 00:16:26.267 lat (msec) : 250=4.49%, 500=62.44%, 750=17.80%, 1000=0.58%, >=2000=14.69% 00:16:26.267 cpu : usr=0.04%, sys=1.91%, ctx=1635, majf=0, minf=32769 00:16:26.267 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.8%, >=64=96.4% 00:16:26.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.267 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:26.267 issued rwts: total=1736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.267 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.267 job5: (groupid=0, jobs=1): err= 0: pid=643112: Thu Oct 17 17:41:02 2024 00:16:26.267 read: IOPS=226, BW=226MiB/s (237MB/s)(2271MiB/10037msec) 00:16:26.267 slat (usec): min=40, max=1468.1k, avg=4399.12, stdev=38376.34 00:16:26.267 clat (msec): min=36, max=2826, avg=537.87, stdev=472.45 00:16:26.267 lat (msec): min=37, max=2830, avg=542.26, stdev=474.27 00:16:26.267 clat percentiles (msec): 00:16:26.267 | 1.00th=[ 90], 5.00th=[ 150], 10.00th=[ 197], 20.00th=[ 262], 00:16:26.267 | 30.00th=[ 279], 40.00th=[ 351], 50.00th=[ 405], 60.00th=[ 439], 00:16:26.267 | 70.00th=[ 527], 80.00th=[ 642], 90.00th=[ 1418], 95.00th=[ 1972], 00:16:26.267 | 99.00th=[ 2072], 99.50th=[ 2072], 99.90th=[ 2836], 99.95th=[ 2836], 00:16:26.267 | 99.99th=[ 2836] 00:16:26.267 bw ( KiB/s): min=30720, max=661504, per=8.73%, avg=262469.53, stdev=151771.54, samples=15 00:16:26.267 iops : min= 30, max= 646, avg=256.20, stdev=148.28, samples=15 00:16:26.267 lat (msec) : 50=0.26%, 100=0.92%, 250=15.98%, 500=48.97%, 750=22.68% 00:16:26.267 lat (msec) : 2000=7.22%, >=2000=3.96% 00:16:26.267 cpu : usr=0.11%, sys=2.42%, ctx=3153, majf=0, minf=32769 00:16:26.267 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:16:26.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.267 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:26.267 issued rwts: total=2271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.267 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:16:26.267 job5: (groupid=0, jobs=1): err= 0: pid=643113: Thu Oct 17 17:41:02 2024 00:16:26.267 read: IOPS=97, BW=97.9MiB/s (103MB/s)(1008MiB/10297msec) 00:16:26.267 slat (usec): min=41, max=2156.2k, avg=10099.73, stdev=104070.14 00:16:26.267 clat (msec): min=112, max=4782, avg=1132.21, stdev=1360.93 00:16:26.267 lat (msec): min=370, max=4782, avg=1142.31, stdev=1364.49 00:16:26.267 clat percentiles (msec): 00:16:26.267 | 1.00th=[ 372], 5.00th=[ 384], 10.00th=[ 418], 20.00th=[ 443], 00:16:26.267 | 30.00th=[ 468], 40.00th=[ 485], 50.00th=[ 527], 60.00th=[ 651], 00:16:26.267 | 70.00th=[ 768], 80.00th=[ 818], 90.00th=[ 4463], 95.00th=[ 4665], 00:16:26.267 | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 4799], 99.95th=[ 4799], 00:16:26.267 | 99.99th=[ 4799] 00:16:26.267 bw ( KiB/s): min= 6131, max=302499, per=6.66%, avg=200156.11, stdev=104760.39, samples=9 00:16:26.267 iops : min= 5, max= 295, avg=195.22, stdev=102.57, samples=9 00:16:26.267 lat (msec) : 250=0.10%, 500=46.73%, 750=20.63%, 1000=15.48%, >=2000=17.06% 00:16:26.267 cpu : usr=0.02%, sys=1.62%, ctx=1131, majf=0, minf=32769 00:16:26.267 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.8% 00:16:26.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.267 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:26.267 issued rwts: total=1008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.267 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.268 job5: (groupid=0, jobs=1): err= 0: pid=643114: Thu Oct 17 17:41:02 2024 00:16:26.268 read: IOPS=65, BW=65.1MiB/s (68.3MB/s)(668MiB/10258msec) 00:16:26.268 slat (usec): min=58, max=2153.9k, avg=15182.65, stdev=133005.78 00:16:26.268 clat (msec): min=112, max=3867, avg=1452.33, stdev=1033.91 00:16:26.268 lat (msec): min=320, max=3869, avg=1467.52, stdev=1037.35 00:16:26.268 clat percentiles (msec): 00:16:26.268 | 1.00th=[ 326], 5.00th=[ 368], 10.00th=[ 409], 20.00th=[ 667], 00:16:26.268 | 30.00th=[ 735], 40.00th=[ 802], 50.00th=[ 827], 60.00th=[ 1418], 00:16:26.268 | 70.00th=[ 1737], 80.00th=[ 2903], 90.00th=[ 2937], 95.00th=[ 3004], 00:16:26.268 | 99.00th=[ 3876], 99.50th=[ 3876], 99.90th=[ 3876], 99.95th=[ 3876], 00:16:26.268 | 99.99th=[ 3876] 00:16:26.268 bw ( KiB/s): min= 1462, max=206848, per=4.09%, avg=123021.44, stdev=70942.57, samples=9 00:16:26.268 iops : min= 1, max= 202, avg=120.00, stdev=69.41, samples=9 00:16:26.268 lat (msec) : 250=0.15%, 500=12.43%, 750=19.16%, 1000=23.35%, 2000=19.01% 00:16:26.268 lat (msec) : >=2000=25.90% 00:16:26.268 cpu : usr=0.03%, sys=1.60%, ctx=1427, majf=0, minf=32769 00:16:26.268 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.6% 00:16:26.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.268 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:26.268 issued rwts: total=668,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.268 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.268 job5: (groupid=0, jobs=1): err= 0: pid=643115: Thu Oct 17 17:41:02 2024 00:16:26.268 read: IOPS=80, BW=80.4MiB/s (84.3MB/s)(987MiB/12271msec) 00:16:26.268 slat (usec): min=44, max=2089.1k, avg=10250.01, stdev=105649.72 00:16:26.268 clat (msec): min=309, max=7054, avg=1540.77, stdev=2002.52 00:16:26.268 lat (msec): min=311, max=7067, avg=1551.02, stdev=2008.84 00:16:26.268 clat percentiles (msec): 00:16:26.268 | 1.00th=[ 313], 5.00th=[ 347], 10.00th=[ 388], 20.00th=[ 405], 00:16:26.268 | 
30.00th=[ 418], 40.00th=[ 468], 50.00th=[ 609], 60.00th=[ 726], 00:16:26.268 | 70.00th=[ 760], 80.00th=[ 2265], 90.00th=[ 6477], 95.00th=[ 6812], 00:16:26.268 | 99.00th=[ 7013], 99.50th=[ 7013], 99.90th=[ 7080], 99.95th=[ 7080], 00:16:26.268 | 99.99th=[ 7080] 00:16:26.268 bw ( KiB/s): min= 1440, max=367904, per=5.32%, avg=159976.27, stdev=117019.97, samples=11 00:16:26.268 iops : min= 1, max= 359, avg=156.09, stdev=114.33, samples=11 00:16:26.268 lat (msec) : 500=44.07%, 750=24.01%, 1000=4.05%, 2000=0.30%, >=2000=27.56% 00:16:26.268 cpu : usr=0.04%, sys=1.56%, ctx=1412, majf=0, minf=32769 00:16:26.268 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.6% 00:16:26.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.268 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:26.268 issued rwts: total=987,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.268 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.268 job5: (groupid=0, jobs=1): err= 0: pid=643116: Thu Oct 17 17:41:02 2024 00:16:26.268 read: IOPS=116, BW=116MiB/s (122MB/s)(1191MiB/10227msec) 00:16:26.268 slat (usec): min=36, max=2153.9k, avg=8487.61, stdev=98270.47 00:16:26.268 clat (msec): min=112, max=3029, avg=1059.34, stdev=978.31 00:16:26.268 lat (msec): min=235, max=3032, avg=1067.83, stdev=980.37 00:16:26.268 clat percentiles (msec): 00:16:26.268 | 1.00th=[ 241], 5.00th=[ 257], 10.00th=[ 275], 20.00th=[ 326], 00:16:26.268 | 30.00th=[ 376], 40.00th=[ 401], 50.00th=[ 426], 60.00th=[ 667], 00:16:26.268 | 70.00th=[ 1687], 80.00th=[ 2601], 90.00th=[ 2702], 95.00th=[ 2836], 00:16:26.268 | 99.00th=[ 2970], 99.50th=[ 3004], 99.90th=[ 3004], 99.95th=[ 3037], 00:16:26.268 | 99.99th=[ 3037] 00:16:26.268 bw ( KiB/s): min= 1532, max=458752, per=5.57%, avg=167552.92, stdev=146009.60, samples=13 00:16:26.268 iops : min= 1, max= 448, avg=163.54, stdev=142.63, samples=13 00:16:26.268 lat (msec) : 250=2.02%, 500=54.58%, 750=4.87%, 1000=6.55%, 2000=10.66% 00:16:26.268 lat (msec) : >=2000=21.33% 00:16:26.268 cpu : usr=0.06%, sys=1.99%, ctx=2113, majf=0, minf=32769 00:16:26.268 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.7%, >=64=94.7% 00:16:26.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.268 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:26.268 issued rwts: total=1191,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.268 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.268 job5: (groupid=0, jobs=1): err= 0: pid=643117: Thu Oct 17 17:41:02 2024 00:16:26.268 read: IOPS=186, BW=186MiB/s (195MB/s)(1867MiB/10023msec) 00:16:26.268 slat (usec): min=39, max=2167.3k, avg=5353.06, stdev=60663.75 00:16:26.268 clat (msec): min=22, max=2969, avg=553.10, stdev=630.84 00:16:26.268 lat (msec): min=23, max=2972, avg=558.45, stdev=634.53 00:16:26.268 clat percentiles (msec): 00:16:26.268 | 1.00th=[ 53], 5.00th=[ 199], 10.00th=[ 257], 20.00th=[ 259], 00:16:26.268 | 30.00th=[ 266], 40.00th=[ 288], 50.00th=[ 305], 60.00th=[ 447], 00:16:26.268 | 70.00th=[ 506], 80.00th=[ 592], 90.00th=[ 844], 95.00th=[ 2702], 00:16:26.268 | 99.00th=[ 2903], 99.50th=[ 2937], 99.90th=[ 2970], 99.95th=[ 2970], 00:16:26.268 | 99.99th=[ 2970] 00:16:26.268 bw ( KiB/s): min=57344, max=495616, per=8.50%, avg=255416.42, stdev=140424.03, samples=12 00:16:26.268 iops : min= 56, max= 484, avg=249.42, stdev=137.11, samples=12 00:16:26.268 lat (msec) : 50=0.37%, 100=2.04%, 250=4.77%, 500=60.85%, 750=19.23% 
00:16:26.268 lat (msec) : 1000=5.84%, >=2000=6.91% 00:16:26.268 cpu : usr=0.13%, sys=1.85%, ctx=3152, majf=0, minf=32769 00:16:26.268 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:16:26.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.268 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:26.268 issued rwts: total=1867,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.268 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.268 job5: (groupid=0, jobs=1): err= 0: pid=643118: Thu Oct 17 17:41:02 2024 00:16:26.268 read: IOPS=73, BW=73.4MiB/s (77.0MB/s)(749MiB/10205msec) 00:16:26.268 slat (usec): min=65, max=1987.5k, avg=13467.77, stdev=100901.15 00:16:26.268 clat (msec): min=112, max=3060, avg=1407.05, stdev=841.02 00:16:26.268 lat (msec): min=309, max=4241, avg=1420.51, stdev=846.35 00:16:26.268 clat percentiles (msec): 00:16:26.268 | 1.00th=[ 334], 5.00th=[ 439], 10.00th=[ 625], 20.00th=[ 751], 00:16:26.268 | 30.00th=[ 844], 40.00th=[ 885], 50.00th=[ 1167], 60.00th=[ 1284], 00:16:26.268 | 70.00th=[ 1804], 80.00th=[ 1871], 90.00th=[ 3004], 95.00th=[ 3004], 00:16:26.268 | 99.00th=[ 3037], 99.50th=[ 3037], 99.90th=[ 3071], 99.95th=[ 3071], 00:16:26.268 | 99.99th=[ 3071] 00:16:26.268 bw ( KiB/s): min= 1582, max=268288, per=3.53%, avg=106079.25, stdev=76792.42, samples=12 00:16:26.268 iops : min= 1, max= 262, avg=103.42, stdev=75.06, samples=12 00:16:26.268 lat (msec) : 250=0.13%, 500=6.68%, 750=12.82%, 1000=28.84%, 2000=33.51% 00:16:26.268 lat (msec) : >=2000=18.02% 00:16:26.268 cpu : usr=0.03%, sys=1.30%, ctx=1447, majf=0, minf=32769 00:16:26.268 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.3%, >=64=91.6% 00:16:26.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.268 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:26.268 issued rwts: total=749,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.268 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.268 job5: (groupid=0, jobs=1): err= 0: pid=643119: Thu Oct 17 17:41:02 2024 00:16:26.268 read: IOPS=192, BW=192MiB/s (202MB/s)(2333MiB/12140msec) 00:16:26.268 slat (usec): min=44, max=2122.1k, avg=4281.39, stdev=67292.35 00:16:26.268 clat (msec): min=128, max=4639, avg=590.00, stdev=1060.71 00:16:26.268 lat (msec): min=129, max=4641, avg=594.28, stdev=1064.53 00:16:26.268 clat percentiles (msec): 00:16:26.268 | 1.00th=[ 130], 5.00th=[ 131], 10.00th=[ 132], 20.00th=[ 133], 00:16:26.268 | 30.00th=[ 134], 40.00th=[ 136], 50.00th=[ 136], 60.00th=[ 140], 00:16:26.268 | 70.00th=[ 355], 80.00th=[ 502], 90.00th=[ 2433], 95.00th=[ 2769], 00:16:26.268 | 99.00th=[ 4530], 99.50th=[ 4597], 99.90th=[ 4665], 99.95th=[ 4665], 00:16:26.268 | 99.99th=[ 4665] 00:16:26.268 bw ( KiB/s): min=24845, max=982957, per=13.65%, avg=410489.45, stdev=375069.22, samples=11 00:16:26.268 iops : min= 24, max= 959, avg=400.73, stdev=366.17, samples=11 00:16:26.268 lat (msec) : 250=64.38%, 500=15.26%, 750=7.80%, 2000=1.07%, >=2000=11.49% 00:16:26.268 cpu : usr=0.09%, sys=2.41%, ctx=2057, majf=0, minf=32769 00:16:26.268 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:16:26.268 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.268 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:26.268 issued rwts: total=2333,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.268 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:16:26.268 job5: (groupid=0, jobs=1): err= 0: pid=643120: Thu Oct 17 17:41:02 2024 00:16:26.268 read: IOPS=53, BW=53.5MiB/s (56.1MB/s)(547MiB/10230msec) 00:16:26.268 slat (usec): min=66, max=2049.3k, avg=18490.29, stdev=153775.20 00:16:26.268 clat (msec): min=112, max=4376, avg=1669.05, stdev=1488.24 00:16:26.268 lat (msec): min=273, max=4382, avg=1687.54, stdev=1492.13 00:16:26.268 clat percentiles (msec): 00:16:26.268 | 1.00th=[ 275], 5.00th=[ 279], 10.00th=[ 418], 20.00th=[ 667], 00:16:26.268 | 30.00th=[ 743], 40.00th=[ 810], 50.00th=[ 844], 60.00th=[ 927], 00:16:26.268 | 70.00th=[ 1905], 80.00th=[ 3977], 90.00th=[ 4077], 95.00th=[ 4144], 00:16:26.268 | 99.00th=[ 4329], 99.50th=[ 4329], 99.90th=[ 4396], 99.95th=[ 4396], 00:16:26.268 | 99.99th=[ 4396] 00:16:26.268 bw ( KiB/s): min= 1526, max=311296, per=3.57%, avg=107454.75, stdev=109880.80, samples=8 00:16:26.268 iops : min= 1, max= 304, avg=104.88, stdev=107.37, samples=8 00:16:26.268 lat (msec) : 250=0.18%, 500=12.43%, 750=18.83%, 1000=37.48%, 2000=1.28% 00:16:26.268 lat (msec) : >=2000=29.80% 00:16:26.268 cpu : usr=0.00%, sys=1.58%, ctx=799, majf=0, minf=32769 00:16:26.268 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=2.9%, 32=5.9%, >=64=88.5% 00:16:26.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.269 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:16:26.269 issued rwts: total=547,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.269 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.269 job5: (groupid=0, jobs=1): err= 0: pid=643121: Thu Oct 17 17:41:02 2024 00:16:26.269 read: IOPS=199, BW=199MiB/s (209MB/s)(2001MiB/10031msec) 00:16:26.269 slat (usec): min=44, max=2065.2k, avg=4996.61, stdev=56871.00 00:16:26.269 clat (msec): min=24, max=3148, avg=528.83, stdev=673.27 00:16:26.269 lat (msec): min=39, max=3150, avg=533.82, stdev=677.33 00:16:26.269 clat percentiles (msec): 00:16:26.269 | 1.00th=[ 63], 5.00th=[ 138], 10.00th=[ 140], 20.00th=[ 140], 00:16:26.269 | 30.00th=[ 153], 40.00th=[ 241], 50.00th=[ 257], 60.00th=[ 259], 00:16:26.269 | 70.00th=[ 542], 80.00th=[ 852], 90.00th=[ 927], 95.00th=[ 2601], 00:16:26.269 | 99.00th=[ 3071], 99.50th=[ 3138], 99.90th=[ 3138], 99.95th=[ 3138], 00:16:26.269 | 99.99th=[ 3138] 00:16:26.269 bw ( KiB/s): min=49152, max=548864, per=8.17%, avg=245666.83, stdev=175836.32, samples=12 00:16:26.269 iops : min= 48, max= 536, avg=239.75, stdev=171.80, samples=12 00:16:26.269 lat (msec) : 50=0.40%, 100=2.40%, 250=44.73%, 500=21.19%, 750=8.55% 00:16:26.269 lat (msec) : 1000=15.64%, 2000=0.05%, >=2000=7.05% 00:16:26.269 cpu : usr=0.10%, sys=2.31%, ctx=3417, majf=0, minf=32769 00:16:26.269 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:16:26.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:26.269 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:26.269 issued rwts: total=2001,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:26.269 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:26.269 job5: (groupid=0, jobs=1): err= 0: pid=643122: Thu Oct 17 17:41:02 2024 00:16:26.269 read: IOPS=280, BW=280MiB/s (294MB/s)(2814MiB/10034msec) 00:16:26.269 slat (usec): min=66, max=2114.0k, avg=3549.07, stdev=61979.78 00:16:26.269 clat (msec): min=33, max=4641, avg=407.68, stdev=902.57 00:16:26.269 lat (msec): min=34, max=4644, avg=411.23, stdev=906.44 00:16:26.269 clat percentiles (msec): 00:16:26.269 | 1.00th=[ 97], 5.00th=[ 136], 
10.00th=[ 136], 20.00th=[ 136],
00:16:26.269 | 30.00th=[ 136], 40.00th=[ 138], 50.00th=[ 140], 60.00th=[ 142],
00:16:26.269 | 70.00th=[ 268], 80.00th=[ 275], 90.00th=[ 288], 95.00th=[ 2299],
00:16:26.269 | 99.00th=[ 4597], 99.50th=[ 4597], 99.90th=[ 4665], 99.95th=[ 4665],
00:16:26.269 | 99.99th=[ 4665]
00:16:26.269 bw ( KiB/s): min=34816, max=960512, per=18.29%, avg=549956.90, stdev=305478.72, samples=10
00:16:26.269 iops : min= 34, max= 938, avg=537.00, stdev=298.33, samples=10
00:16:26.269 lat (msec) : 50=0.28%, 100=0.78%, 250=64.78%, 500=27.08%, 2000=1.95%
00:16:26.269 lat (msec) : >=2000=5.12%
00:16:26.269 cpu : usr=0.14%, sys=3.64%, ctx=2485, majf=0, minf=32769
00:16:26.269 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8%
00:16:26.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:26.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:16:26.269 issued rwts: total=2814,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:26.269 latency : target=0, window=0, percentile=100.00%, depth=128
00:16:26.269
00:16:26.269 Run status group 0 (all jobs):
00:16:26.269 READ: bw=2936MiB/s (3079MB/s), 669KiB/s-280MiB/s (686kB/s-294MB/s), io=35.5GiB (38.1GB), run=10023-12372msec
00:16:26.269
00:16:26.269 Disk stats (read/write):
00:16:26.269 nvme0n1: ios=13295/0, merge=0/0, ticks=11744159/0, in_queue=11744159, util=98.36%
00:16:26.269 nvme1n1: ios=25587/0, merge=0/0, ticks=10940749/0, in_queue=10940749, util=98.25%
00:16:26.269 nvme2n1: ios=4204/0, merge=0/0, ticks=8822939/0, in_queue=8822939, util=98.67%
00:16:26.269 nvme3n1: ios=6984/0, merge=0/0, ticks=8729280/0, in_queue=8729280, util=98.48%
00:16:26.269 nvme4n1: ios=88059/0, merge=0/0, ticks=10572292/0, in_queue=10572292, util=99.00%
00:16:26.269 nvme5n1: ios=150849/0, merge=0/0, ticks=9777403/0, in_queue=9777403, util=99.30%
17:41:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync
17:41:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5
17:41:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
17:41:02 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0
00:16:27.645 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s)
00:16:27.645 17:41:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000
00:16:27.645 17:41:05 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0
00:16:27.645 17:41:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:16:27.645 17:41:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000000
00:16:27.645 17:41:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:16:27.645 17:41:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000000
00:16:27.904 17:41:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0
00:16:27.904 17:41:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:16:27.904 17:41:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:27.904 17:41:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:16:27.904 17:41:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:27.904 17:41:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
00:16:27.904 17:41:06 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:31.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:31.191 17:41:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001
00:16:31.191 17:41:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0
00:16:31.191 17:41:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:16:31.191 17:41:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000001
00:16:31.191 17:41:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:16:31.191 17:41:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000001
00:16:31.191 17:41:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0
00:16:31.191 17:41:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:31.191 17:41:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:31.191 17:41:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x
00:16:31.191 17:41:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:31.191 17:41:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5)
00:16:31.191 17:41:09 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2
00:16:34.478 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s)
00:16:34.478 17:41:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002
00:16:34.478 17:41:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0
00:16:34.478 17:41:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:16:34.478 17:41:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000002
00:16:34.478 17:41:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:16:34.478 17:41:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000002
00:16:34.478 17:41:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0
00:16:34.478 17:41:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:16:34.479 17:41:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.479 17:41:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:34.479 17:41:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.479 17:41:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:16:34.479 17:41:12 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:16:37.764 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:16:37.764 17:41:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:16:37.764 17:41:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:16:37.764 17:41:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:37.764 17:41:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000003 00:16:37.764 17:41:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000003 00:16:37.764 17:41:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:37.764 17:41:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:16:37.764 17:41:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:37.764 17:41:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.764 17:41:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:37.764 17:41:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.764 17:41:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:16:37.764 17:41:15 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:16:41.048 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:16:41.048 17:41:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:16:41.048 17:41:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:16:41.048 17:41:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:41.048 17:41:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000004 00:16:41.048 17:41:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:41.048 17:41:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000004 00:16:41.048 17:41:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:16:41.048 17:41:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode4 00:16:41.048 17:41:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.048 17:41:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:41.048 17:41:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.048 17:41:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:16:41.048 17:41:19 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:16:44.335 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:16:44.335 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:16:44.335 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:16:44.335 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:44.335 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000005 00:16:44.335 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000005 00:16:44.335 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:44.335 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:16:44.335 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:16:44.335 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.335 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:44.335 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.335 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:44.335 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:16:44.335 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:44.335 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # sync 00:16:44.335 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:16:44.335 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:16:44.335 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set +e 00:16:44.335 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:44.335 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:16:44.335 rmmod nvme_rdma 00:16:44.335 rmmod nvme_fabrics 00:16:44.335 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:44.335 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@128 -- # set -e 00:16:44.335 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@129 -- # return 0 00:16:44.335 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@515 -- # '[' -n 641416 ']' 00:16:44.335 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@516 -- # killprocess 641416 00:16:44.335 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@950 -- # '[' -z 641416 ']' 00:16:44.335 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # kill -0 641416 00:16:44.335 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@955 -- # uname 00:16:44.336 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:44.336 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 641416 00:16:44.336 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:44.336 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:44.336 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@968 -- # echo 'killing process with pid 641416' 00:16:44.336 killing process with pid 641416 00:16:44.336 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@969 -- # kill 641416 00:16:44.336 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@974 -- # wait 641416 00:16:44.594 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:44.594 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:16:44.594 00:16:44.594 real 0m50.163s 00:16:44.594 user 2m58.056s 00:16:44.594 sys 0m16.256s 00:16:44.594 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:44.594 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:16:44.594 ************************************ 00:16:44.594 END TEST nvmf_srq_overwhelm 00:16:44.594 ************************************ 00:16:44.594 17:41:22 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:16:44.594 17:41:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:44.594 17:41:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:44.594 17:41:22 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:44.594 ************************************ 00:16:44.594 START TEST nvmf_shutdown 00:16:44.594 ************************************ 00:16:44.594 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:16:44.594 * Looking for test storage... 
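The killprocess call above only signals PID 641416 after three guards, all visible in the trace: the PID still answers kill -0, the platform is Linux (the comm= lookup is Linux ps syntax), and the resolved process name (reactor_0 here) is not sudo. A simplified re-creation under those assumptions; the real helper's handling of the sudo case and of non-child PIDs is more involved:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 1    # process already gone
        [ "$(uname)" = Linux ] || return 1        # ps --no-headers is Linux-only here
        local name
        name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK target
        [ "$name" = sudo ] && return 1            # never signal a bare sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                   # reaps only when pid is our child
    }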
00:16:44.594 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:44.594 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:44.594 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:16:44.594 17:41:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:44.853 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:44.853 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:44.853 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:44.853 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:44.853 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:16:44.853 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:16:44.853 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:16:44.853 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:16:44.853 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:16:44.853 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:16:44.853 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:16:44.853 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:44.853 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:16:44.853 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:16:44.853 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:44.853 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:44.853 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:16:44.853 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:16:44.853 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:44.853 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:16:44.853 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:16:44.853 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:16:44.853 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:16:44.853 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:44.853 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:16:44.853 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:44.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.854 --rc genhtml_branch_coverage=1 00:16:44.854 --rc genhtml_function_coverage=1 00:16:44.854 --rc genhtml_legend=1 00:16:44.854 --rc geninfo_all_blocks=1 00:16:44.854 --rc geninfo_unexecuted_blocks=1 00:16:44.854 00:16:44.854 ' 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:44.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.854 --rc genhtml_branch_coverage=1 00:16:44.854 --rc genhtml_function_coverage=1 00:16:44.854 --rc genhtml_legend=1 00:16:44.854 --rc geninfo_all_blocks=1 00:16:44.854 --rc geninfo_unexecuted_blocks=1 00:16:44.854 00:16:44.854 ' 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:44.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.854 --rc genhtml_branch_coverage=1 00:16:44.854 --rc genhtml_function_coverage=1 00:16:44.854 --rc genhtml_legend=1 00:16:44.854 --rc geninfo_all_blocks=1 00:16:44.854 --rc geninfo_unexecuted_blocks=1 00:16:44.854 00:16:44.854 ' 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:44.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:44.854 --rc genhtml_branch_coverage=1 00:16:44.854 --rc genhtml_function_coverage=1 00:16:44.854 --rc genhtml_legend=1 00:16:44.854 --rc geninfo_all_blocks=1 00:16:44.854 --rc geninfo_unexecuted_blocks=1 00:16:44.854 00:16:44.854 ' 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # 
uname -s 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:44.854 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:44.854 17:41:23 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:16:44.854 ************************************ 00:16:44.854 START TEST nvmf_shutdown_tc1 00:16:44.854 ************************************ 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:16:44.854 17:41:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:51.507 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:51.507 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:16:51.507 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:51.507 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:51.507 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:51.507 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:51.507 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:51.507 17:41:29 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:16:51.507 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:51.507 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:16:51.507 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:16:51.507 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:16:51.507 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:16:51.507 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:16:51.507 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:16:51.507 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:51.507 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:51.507 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:51.507 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:51.507 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:51.507 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:51.507 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # 
pci_devs=("${mlx[@]}") 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:16:51.508 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:16:51.508 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 
0000:18:00.0: mlx_0_0' 00:16:51.508 Found net devices under 0000:18:00.0: mlx_0_0 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:16:51.508 Found net devices under 0000:18:00.1: mlx_0_1 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # rdma_device_init 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # uname 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe ib_core 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@528 -- # allocate_nic_ips 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 
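The device discovery that just completed maps each matched mlx5 PCI function to its netdevs through sysfs; stripped of the trace noise it is essentially:

    # PCI addresses taken from this log; any matched mlx5 function would do.
    for pci in 0000:18:00.0 0000:18:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # glob the sysfs net dir
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the ifnames
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done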
00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:16:51.508 17:41:29 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:16:51.508 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:51.508 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:16:51.508 altname enp24s0f0np0 00:16:51.508 altname ens785f0np0 00:16:51.508 inet 192.168.100.8/24 scope global mlx_0_0 00:16:51.508 valid_lft forever preferred_lft forever 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:51.508 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:16:51.509 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:51.509 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:16:51.509 altname enp24s0f1np1 00:16:51.509 altname ens785f1np1 00:16:51.509 inet 192.168.100.9/24 scope global mlx_0_1 00:16:51.509 valid_lft forever preferred_lft forever 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:51.509 
17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@109 -- # continue 2 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:16:51.509 192.168.100.9' 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:16:51.509 192.168.100.9' 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # head -n 1 00:16:51.509 17:41:29 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # head -n 1 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:16:51.509 192.168.100.9' 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # tail -n +2 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=650237 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 650237 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 650237 ']' 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:51.509 [2024-10-17 17:41:29.506587] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
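The address harvest that finishes above (NVMF_FIRST_TARGET_IP=192.168.100.8, NVMF_SECOND_TARGET_IP=192.168.100.9) reduces to one pipeline per RDMA interface, exactly as traced at common.sh@117:

    get_ip_address() {
        local interface=$1
        # Field 4 of `ip -o -4 addr show` is addr/prefix; cut drops the prefix.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # -> 192.168.100.8 on this rig
    get_ip_address mlx_0_1    # -> 192.168.100.9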
00:16:51.509 [2024-10-17 17:41:29.506652] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.509 [2024-10-17 17:41:29.579977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:51.509 [2024-10-17 17:41:29.625834] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.509 [2024-10-17 17:41:29.625881] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:51.509 [2024-10-17 17:41:29.625891] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:51.509 [2024-10-17 17:41:29.625900] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:51.509 [2024-10-17 17:41:29.625907] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:51.509 [2024-10-17 17:41:29.627339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.509 [2024-10-17 17:41:29.627437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:51.509 [2024-10-17 17:41:29.627731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:16:51.509 [2024-10-17 17:41:29.627732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.509 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:51.509 [2024-10-17 17:41:29.812299] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x129e5c0/0x12a2ab0) succeed. 00:16:51.509 [2024-10-17 17:41:29.822923] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x129fc50/0x12e4150) succeed. 
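With both IB devices up, the target is configured over RPC. The transport creation is traced just above verbatim; the per-subsystem batches that the shutdown.sh@29 cat loop below appends to rpcs.txt are not shown in this excerpt, so the lines after the first one are a hypothetical example of what one batch could contain (the 64/512 Malloc geometry comes from shutdown.sh@12-13):

    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    # Hypothetical single-subsystem batch; the real rpcs.txt content is not visible here:
    ./scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420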
00:16:51.776 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.776 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:16:51.776 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:16:51.776 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:51.776 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:51.776 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:51.776 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:51.776 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:16:51.776 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:51.776 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:16:51.776 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:51.776 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:16:51.776 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:51.776 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:16:51.776 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:51.776 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:16:51.776 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:51.776 17:41:29 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:16:51.776 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:51.776 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:16:51.776 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:51.776 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:16:51.776 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:51.776 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:16:51.776 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:51.776 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:16:51.776 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:16:51.776 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.776 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:51.776 Malloc1 00:16:51.776 [2024-10-17 17:41:30.075224] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:51.776 Malloc2 00:16:51.776 Malloc3 00:16:52.034 Malloc4 00:16:52.034 Malloc5 00:16:52.034 Malloc6 00:16:52.034 Malloc7 00:16:52.034 Malloc8 00:16:52.292 Malloc9 00:16:52.292 Malloc10 00:16:52.292 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.292 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:16:52.292 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:52.292 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:52.292 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=650466 00:16:52.292 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 650466 /var/tmp/bdevperf.sock 00:16:52.292 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 650466 ']' 00:16:52.292 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:52.292 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:52.292 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:16:52.292 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:52.292 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:52.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
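The gen_nvmf_target_json trace that follows builds one params fragment per subsystem from the heredoc shown below; with the values this run established earlier (rdma transport, target 192.168.100.8, port 4420), the first fragment would expand to roughly:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "rdma",
        "traddr": "192.168.100.8",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }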
00:16:52.292 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:52.292 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:16:52.292 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:16:52.293 { 00:16:52.293 "params": { 00:16:52.293 "name": "Nvme$subsystem", 00:16:52.293 "trtype": "$TEST_TRANSPORT", 00:16:52.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:52.293 "adrfam": "ipv4", 00:16:52.293 "trsvcid": "$NVMF_PORT", 00:16:52.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:52.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:52.293 "hdgst": ${hdgst:-false}, 00:16:52.293 "ddgst": ${ddgst:-false} 00:16:52.293 }, 00:16:52.293 "method": "bdev_nvme_attach_controller" 00:16:52.293 } 00:16:52.293 EOF 00:16:52.293 )") 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:16:52.293 { 00:16:52.293 "params": { 00:16:52.293 "name": "Nvme$subsystem", 00:16:52.293 "trtype": "$TEST_TRANSPORT", 00:16:52.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:52.293 "adrfam": "ipv4", 00:16:52.293 "trsvcid": "$NVMF_PORT", 00:16:52.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:52.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:52.293 "hdgst": ${hdgst:-false}, 00:16:52.293 "ddgst": ${ddgst:-false} 00:16:52.293 }, 00:16:52.293 "method": "bdev_nvme_attach_controller" 00:16:52.293 } 00:16:52.293 EOF 00:16:52.293 )") 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:16:52.293 { 00:16:52.293 "params": { 00:16:52.293 "name": "Nvme$subsystem", 00:16:52.293 "trtype": "$TEST_TRANSPORT", 00:16:52.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:52.293 "adrfam": "ipv4", 00:16:52.293 "trsvcid": "$NVMF_PORT", 00:16:52.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:52.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:52.293 "hdgst": ${hdgst:-false}, 00:16:52.293 "ddgst": ${ddgst:-false} 00:16:52.293 }, 00:16:52.293 "method": "bdev_nvme_attach_controller" 00:16:52.293 } 00:16:52.293 EOF 00:16:52.293 )") 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:16:52.293 { 00:16:52.293 "params": { 00:16:52.293 "name": "Nvme$subsystem", 00:16:52.293 "trtype": "$TEST_TRANSPORT", 00:16:52.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:52.293 "adrfam": "ipv4", 00:16:52.293 "trsvcid": "$NVMF_PORT", 00:16:52.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:52.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:52.293 "hdgst": ${hdgst:-false}, 00:16:52.293 "ddgst": ${ddgst:-false} 00:16:52.293 }, 00:16:52.293 "method": "bdev_nvme_attach_controller" 00:16:52.293 } 00:16:52.293 EOF 00:16:52.293 )") 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:16:52.293 { 00:16:52.293 "params": { 00:16:52.293 "name": "Nvme$subsystem", 00:16:52.293 "trtype": "$TEST_TRANSPORT", 00:16:52.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:52.293 "adrfam": "ipv4", 00:16:52.293 "trsvcid": "$NVMF_PORT", 00:16:52.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:52.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:52.293 "hdgst": ${hdgst:-false}, 00:16:52.293 "ddgst": ${ddgst:-false} 00:16:52.293 }, 00:16:52.293 "method": "bdev_nvme_attach_controller" 00:16:52.293 } 00:16:52.293 EOF 00:16:52.293 )") 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:16:52.293 { 00:16:52.293 "params": { 00:16:52.293 "name": "Nvme$subsystem", 00:16:52.293 "trtype": "$TEST_TRANSPORT", 00:16:52.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:52.293 "adrfam": "ipv4", 00:16:52.293 "trsvcid": "$NVMF_PORT", 00:16:52.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:52.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:52.293 "hdgst": ${hdgst:-false}, 00:16:52.293 "ddgst": ${ddgst:-false} 00:16:52.293 }, 00:16:52.293 "method": "bdev_nvme_attach_controller" 00:16:52.293 } 00:16:52.293 EOF 00:16:52.293 )") 00:16:52.293 [2024-10-17 17:41:30.589932] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
00:16:52.293 [2024-10-17 17:41:30.590009] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:16:52.293 { 00:16:52.293 "params": { 00:16:52.293 "name": "Nvme$subsystem", 00:16:52.293 "trtype": "$TEST_TRANSPORT", 00:16:52.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:52.293 "adrfam": "ipv4", 00:16:52.293 "trsvcid": "$NVMF_PORT", 00:16:52.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:52.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:52.293 "hdgst": ${hdgst:-false}, 00:16:52.293 "ddgst": ${ddgst:-false} 00:16:52.293 }, 00:16:52.293 "method": "bdev_nvme_attach_controller" 00:16:52.293 } 00:16:52.293 EOF 00:16:52.293 )") 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:16:52.293 { 00:16:52.293 "params": { 00:16:52.293 "name": "Nvme$subsystem", 00:16:52.293 "trtype": "$TEST_TRANSPORT", 00:16:52.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:52.293 "adrfam": "ipv4", 00:16:52.293 "trsvcid": "$NVMF_PORT", 00:16:52.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:52.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:52.293 "hdgst": ${hdgst:-false}, 00:16:52.293 "ddgst": ${ddgst:-false} 00:16:52.293 }, 00:16:52.293 "method": "bdev_nvme_attach_controller" 00:16:52.293 } 00:16:52.293 EOF 00:16:52.293 )") 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:16:52.293 { 00:16:52.293 "params": { 00:16:52.293 "name": "Nvme$subsystem", 00:16:52.293 "trtype": "$TEST_TRANSPORT", 00:16:52.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:52.293 "adrfam": "ipv4", 00:16:52.293 "trsvcid": "$NVMF_PORT", 00:16:52.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:52.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:52.293 "hdgst": ${hdgst:-false}, 00:16:52.293 "ddgst": ${ddgst:-false} 00:16:52.293 }, 00:16:52.293 "method": "bdev_nvme_attach_controller" 00:16:52.293 } 00:16:52.293 EOF 00:16:52.293 )") 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:16:52.293 { 00:16:52.293 "params": { 00:16:52.293 "name": "Nvme$subsystem", 
00:16:52.293 "trtype": "$TEST_TRANSPORT", 00:16:52.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:52.293 "adrfam": "ipv4", 00:16:52.293 "trsvcid": "$NVMF_PORT", 00:16:52.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:52.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:52.293 "hdgst": ${hdgst:-false}, 00:16:52.293 "ddgst": ${ddgst:-false} 00:16:52.293 }, 00:16:52.293 "method": "bdev_nvme_attach_controller" 00:16:52.293 } 00:16:52.293 EOF 00:16:52.293 )") 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:16:52.293 17:41:30 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:16:52.293 "params": { 00:16:52.293 "name": "Nvme1", 00:16:52.293 "trtype": "rdma", 00:16:52.293 "traddr": "192.168.100.8", 00:16:52.293 "adrfam": "ipv4", 00:16:52.293 "trsvcid": "4420", 00:16:52.293 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:52.293 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:52.293 "hdgst": false, 00:16:52.293 "ddgst": false 00:16:52.293 }, 00:16:52.293 "method": "bdev_nvme_attach_controller" 00:16:52.293 },{ 00:16:52.293 "params": { 00:16:52.293 "name": "Nvme2", 00:16:52.293 "trtype": "rdma", 00:16:52.294 "traddr": "192.168.100.8", 00:16:52.294 "adrfam": "ipv4", 00:16:52.294 "trsvcid": "4420", 00:16:52.294 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:52.294 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:52.294 "hdgst": false, 00:16:52.294 "ddgst": false 00:16:52.294 }, 00:16:52.294 "method": "bdev_nvme_attach_controller" 00:16:52.294 },{ 00:16:52.294 "params": { 00:16:52.294 "name": "Nvme3", 00:16:52.294 "trtype": "rdma", 00:16:52.294 "traddr": "192.168.100.8", 00:16:52.294 "adrfam": "ipv4", 00:16:52.294 "trsvcid": "4420", 00:16:52.294 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:52.294 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:52.294 "hdgst": false, 00:16:52.294 "ddgst": false 00:16:52.294 }, 00:16:52.294 "method": "bdev_nvme_attach_controller" 00:16:52.294 },{ 00:16:52.294 "params": { 00:16:52.294 "name": "Nvme4", 00:16:52.294 "trtype": "rdma", 00:16:52.294 "traddr": "192.168.100.8", 00:16:52.294 "adrfam": "ipv4", 00:16:52.294 "trsvcid": "4420", 00:16:52.294 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:52.294 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:52.294 "hdgst": false, 00:16:52.294 "ddgst": false 00:16:52.294 }, 00:16:52.294 "method": "bdev_nvme_attach_controller" 00:16:52.294 },{ 00:16:52.294 "params": { 00:16:52.294 "name": "Nvme5", 00:16:52.294 "trtype": "rdma", 00:16:52.294 "traddr": "192.168.100.8", 00:16:52.294 "adrfam": "ipv4", 00:16:52.294 "trsvcid": "4420", 00:16:52.294 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:52.294 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:52.294 "hdgst": false, 00:16:52.294 "ddgst": false 00:16:52.294 }, 00:16:52.294 "method": "bdev_nvme_attach_controller" 00:16:52.294 },{ 00:16:52.294 "params": { 00:16:52.294 "name": "Nvme6", 00:16:52.294 "trtype": "rdma", 00:16:52.294 "traddr": "192.168.100.8", 00:16:52.294 "adrfam": "ipv4", 00:16:52.294 "trsvcid": "4420", 00:16:52.294 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:52.294 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:52.294 "hdgst": false, 00:16:52.294 "ddgst": false 00:16:52.294 }, 00:16:52.294 "method": 
"bdev_nvme_attach_controller" 00:16:52.294 },{ 00:16:52.294 "params": { 00:16:52.294 "name": "Nvme7", 00:16:52.294 "trtype": "rdma", 00:16:52.294 "traddr": "192.168.100.8", 00:16:52.294 "adrfam": "ipv4", 00:16:52.294 "trsvcid": "4420", 00:16:52.294 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:52.294 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:52.294 "hdgst": false, 00:16:52.294 "ddgst": false 00:16:52.294 }, 00:16:52.294 "method": "bdev_nvme_attach_controller" 00:16:52.294 },{ 00:16:52.294 "params": { 00:16:52.294 "name": "Nvme8", 00:16:52.294 "trtype": "rdma", 00:16:52.294 "traddr": "192.168.100.8", 00:16:52.294 "adrfam": "ipv4", 00:16:52.294 "trsvcid": "4420", 00:16:52.294 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:52.294 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:52.294 "hdgst": false, 00:16:52.294 "ddgst": false 00:16:52.294 }, 00:16:52.294 "method": "bdev_nvme_attach_controller" 00:16:52.294 },{ 00:16:52.294 "params": { 00:16:52.294 "name": "Nvme9", 00:16:52.294 "trtype": "rdma", 00:16:52.294 "traddr": "192.168.100.8", 00:16:52.294 "adrfam": "ipv4", 00:16:52.294 "trsvcid": "4420", 00:16:52.294 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:52.294 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:52.294 "hdgst": false, 00:16:52.294 "ddgst": false 00:16:52.294 }, 00:16:52.294 "method": "bdev_nvme_attach_controller" 00:16:52.294 },{ 00:16:52.294 "params": { 00:16:52.294 "name": "Nvme10", 00:16:52.294 "trtype": "rdma", 00:16:52.294 "traddr": "192.168.100.8", 00:16:52.294 "adrfam": "ipv4", 00:16:52.294 "trsvcid": "4420", 00:16:52.294 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:52.294 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:52.294 "hdgst": false, 00:16:52.294 "ddgst": false 00:16:52.294 }, 00:16:52.294 "method": "bdev_nvme_attach_controller" 00:16:52.294 }' 00:16:52.294 [2024-10-17 17:41:30.665244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.552 [2024-10-17 17:41:30.708812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.483 17:41:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:53.483 17:41:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:16:53.483 17:41:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:53.483 17:41:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.483 17:41:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:53.483 17:41:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.483 17:41:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 650466 00:16:53.483 17:41:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:16:53.483 17:41:31 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:16:54.416 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 650466 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:16:54.416 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@89 -- # kill -0 650237 00:16:54.416 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:54.416 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:54.416 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:16:54.416 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:16:54.416 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:16:54.416 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:16:54.416 { 00:16:54.416 "params": { 00:16:54.416 "name": "Nvme$subsystem", 00:16:54.416 "trtype": "$TEST_TRANSPORT", 00:16:54.416 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:54.416 "adrfam": "ipv4", 00:16:54.416 "trsvcid": "$NVMF_PORT", 00:16:54.416 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:54.416 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:54.416 "hdgst": ${hdgst:-false}, 00:16:54.416 "ddgst": ${ddgst:-false} 00:16:54.416 }, 00:16:54.416 "method": "bdev_nvme_attach_controller" 00:16:54.416 } 00:16:54.417 EOF 00:16:54.417 )") 00:16:54.417 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:16:54.417 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:16:54.417 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:16:54.417 { 00:16:54.417 "params": { 00:16:54.417 "name": "Nvme$subsystem", 00:16:54.417 "trtype": "$TEST_TRANSPORT", 00:16:54.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:54.417 "adrfam": "ipv4", 00:16:54.417 "trsvcid": "$NVMF_PORT", 00:16:54.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:54.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:54.417 "hdgst": ${hdgst:-false}, 00:16:54.417 "ddgst": ${ddgst:-false} 00:16:54.417 }, 00:16:54.417 "method": "bdev_nvme_attach_controller" 00:16:54.417 } 00:16:54.417 EOF 00:16:54.417 )") 00:16:54.417 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:16:54.417 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:16:54.417 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:16:54.417 { 00:16:54.417 "params": { 00:16:54.417 "name": "Nvme$subsystem", 00:16:54.417 "trtype": "$TEST_TRANSPORT", 00:16:54.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:54.417 "adrfam": "ipv4", 00:16:54.417 "trsvcid": "$NVMF_PORT", 00:16:54.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:54.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:54.417 "hdgst": ${hdgst:-false}, 00:16:54.417 "ddgst": ${ddgst:-false} 00:16:54.417 }, 00:16:54.417 "method": "bdev_nvme_attach_controller" 00:16:54.417 } 00:16:54.417 EOF 00:16:54.417 )") 00:16:54.417 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:16:54.417 17:41:32 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:16:54.417 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:16:54.417 { 00:16:54.417 "params": { 00:16:54.417 "name": "Nvme$subsystem", 00:16:54.417 "trtype": "$TEST_TRANSPORT", 00:16:54.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:54.417 "adrfam": "ipv4", 00:16:54.417 "trsvcid": "$NVMF_PORT", 00:16:54.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:54.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:54.417 "hdgst": ${hdgst:-false}, 00:16:54.417 "ddgst": ${ddgst:-false} 00:16:54.417 }, 00:16:54.417 "method": "bdev_nvme_attach_controller" 00:16:54.417 } 00:16:54.417 EOF 00:16:54.417 )") 00:16:54.417 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:16:54.417 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:16:54.417 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:16:54.417 { 00:16:54.417 "params": { 00:16:54.417 "name": "Nvme$subsystem", 00:16:54.417 "trtype": "$TEST_TRANSPORT", 00:16:54.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:54.417 "adrfam": "ipv4", 00:16:54.417 "trsvcid": "$NVMF_PORT", 00:16:54.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:54.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:54.417 "hdgst": ${hdgst:-false}, 00:16:54.417 "ddgst": ${ddgst:-false} 00:16:54.417 }, 00:16:54.417 "method": "bdev_nvme_attach_controller" 00:16:54.417 } 00:16:54.417 EOF 00:16:54.417 )") 00:16:54.417 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:16:54.417 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:16:54.417 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:16:54.417 { 00:16:54.417 "params": { 00:16:54.417 "name": "Nvme$subsystem", 00:16:54.417 "trtype": "$TEST_TRANSPORT", 00:16:54.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:54.417 "adrfam": "ipv4", 00:16:54.417 "trsvcid": "$NVMF_PORT", 00:16:54.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:54.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:54.417 "hdgst": ${hdgst:-false}, 00:16:54.417 "ddgst": ${ddgst:-false} 00:16:54.417 }, 00:16:54.417 "method": "bdev_nvme_attach_controller" 00:16:54.417 } 00:16:54.417 EOF 00:16:54.417 )") 00:16:54.417 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:16:54.417 [2024-10-17 17:41:32.626262] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
00:16:54.417 [2024-10-17 17:41:32.626321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid650787 ] 00:16:54.417 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:16:54.417 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:16:54.417 { 00:16:54.417 "params": { 00:16:54.417 "name": "Nvme$subsystem", 00:16:54.417 "trtype": "$TEST_TRANSPORT", 00:16:54.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:54.417 "adrfam": "ipv4", 00:16:54.417 "trsvcid": "$NVMF_PORT", 00:16:54.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:54.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:54.417 "hdgst": ${hdgst:-false}, 00:16:54.417 "ddgst": ${ddgst:-false} 00:16:54.417 }, 00:16:54.417 "method": "bdev_nvme_attach_controller" 00:16:54.417 } 00:16:54.417 EOF 00:16:54.417 )") 00:16:54.417 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:16:54.417 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:16:54.417 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:16:54.417 { 00:16:54.417 "params": { 00:16:54.417 "name": "Nvme$subsystem", 00:16:54.417 "trtype": "$TEST_TRANSPORT", 00:16:54.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:54.417 "adrfam": "ipv4", 00:16:54.417 "trsvcid": "$NVMF_PORT", 00:16:54.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:54.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:54.417 "hdgst": ${hdgst:-false}, 00:16:54.417 "ddgst": ${ddgst:-false} 00:16:54.417 }, 00:16:54.417 "method": "bdev_nvme_attach_controller" 00:16:54.417 } 00:16:54.417 EOF 00:16:54.417 )") 00:16:54.417 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:16:54.417 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:16:54.417 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:16:54.417 { 00:16:54.417 "params": { 00:16:54.417 "name": "Nvme$subsystem", 00:16:54.417 "trtype": "$TEST_TRANSPORT", 00:16:54.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:54.417 "adrfam": "ipv4", 00:16:54.417 "trsvcid": "$NVMF_PORT", 00:16:54.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:54.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:54.417 "hdgst": ${hdgst:-false}, 00:16:54.417 "ddgst": ${ddgst:-false} 00:16:54.417 }, 00:16:54.417 "method": "bdev_nvme_attach_controller" 00:16:54.417 } 00:16:54.417 EOF 00:16:54.417 )") 00:16:54.417 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:16:54.417 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:16:54.417 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:16:54.417 { 00:16:54.417 "params": { 00:16:54.417 "name": "Nvme$subsystem", 00:16:54.417 "trtype": "$TEST_TRANSPORT", 00:16:54.417 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:16:54.417 "adrfam": "ipv4", 00:16:54.417 "trsvcid": "$NVMF_PORT", 00:16:54.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:54.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:54.417 "hdgst": ${hdgst:-false}, 00:16:54.417 "ddgst": ${ddgst:-false} 00:16:54.417 }, 00:16:54.417 "method": "bdev_nvme_attach_controller" 00:16:54.417 } 00:16:54.417 EOF 00:16:54.417 )") 00:16:54.417 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:16:54.417 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:16:54.417 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:16:54.417 17:41:32 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:16:54.417 "params": { 00:16:54.417 "name": "Nvme1", 00:16:54.417 "trtype": "rdma", 00:16:54.417 "traddr": "192.168.100.8", 00:16:54.417 "adrfam": "ipv4", 00:16:54.417 "trsvcid": "4420", 00:16:54.417 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:54.417 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:54.417 "hdgst": false, 00:16:54.417 "ddgst": false 00:16:54.417 }, 00:16:54.417 "method": "bdev_nvme_attach_controller" 00:16:54.417 },{ 00:16:54.417 "params": { 00:16:54.417 "name": "Nvme2", 00:16:54.417 "trtype": "rdma", 00:16:54.418 "traddr": "192.168.100.8", 00:16:54.418 "adrfam": "ipv4", 00:16:54.418 "trsvcid": "4420", 00:16:54.418 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:54.418 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:54.418 "hdgst": false, 00:16:54.418 "ddgst": false 00:16:54.418 }, 00:16:54.418 "method": "bdev_nvme_attach_controller" 00:16:54.418 },{ 00:16:54.418 "params": { 00:16:54.418 "name": "Nvme3", 00:16:54.418 "trtype": "rdma", 00:16:54.418 "traddr": "192.168.100.8", 00:16:54.418 "adrfam": "ipv4", 00:16:54.418 "trsvcid": "4420", 00:16:54.418 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:54.418 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:54.418 "hdgst": false, 00:16:54.418 "ddgst": false 00:16:54.418 }, 00:16:54.418 "method": "bdev_nvme_attach_controller" 00:16:54.418 },{ 00:16:54.418 "params": { 00:16:54.418 "name": "Nvme4", 00:16:54.418 "trtype": "rdma", 00:16:54.418 "traddr": "192.168.100.8", 00:16:54.418 "adrfam": "ipv4", 00:16:54.418 "trsvcid": "4420", 00:16:54.418 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:54.418 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:54.418 "hdgst": false, 00:16:54.418 "ddgst": false 00:16:54.418 }, 00:16:54.418 "method": "bdev_nvme_attach_controller" 00:16:54.418 },{ 00:16:54.418 "params": { 00:16:54.418 "name": "Nvme5", 00:16:54.418 "trtype": "rdma", 00:16:54.418 "traddr": "192.168.100.8", 00:16:54.418 "adrfam": "ipv4", 00:16:54.418 "trsvcid": "4420", 00:16:54.418 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:54.418 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:54.418 "hdgst": false, 00:16:54.418 "ddgst": false 00:16:54.418 }, 00:16:54.418 "method": "bdev_nvme_attach_controller" 00:16:54.418 },{ 00:16:54.418 "params": { 00:16:54.418 "name": "Nvme6", 00:16:54.418 "trtype": "rdma", 00:16:54.418 "traddr": "192.168.100.8", 00:16:54.418 "adrfam": "ipv4", 00:16:54.418 "trsvcid": "4420", 00:16:54.418 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:54.418 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:54.418 "hdgst": false, 00:16:54.418 "ddgst": false 00:16:54.418 }, 00:16:54.418 "method": "bdev_nvme_attach_controller" 00:16:54.418 },{ 00:16:54.418 "params": { 00:16:54.418 "name": "Nvme7", 00:16:54.418 
"trtype": "rdma", 00:16:54.418 "traddr": "192.168.100.8", 00:16:54.418 "adrfam": "ipv4", 00:16:54.418 "trsvcid": "4420", 00:16:54.418 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:54.418 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:54.418 "hdgst": false, 00:16:54.418 "ddgst": false 00:16:54.418 }, 00:16:54.418 "method": "bdev_nvme_attach_controller" 00:16:54.418 },{ 00:16:54.418 "params": { 00:16:54.418 "name": "Nvme8", 00:16:54.418 "trtype": "rdma", 00:16:54.418 "traddr": "192.168.100.8", 00:16:54.418 "adrfam": "ipv4", 00:16:54.418 "trsvcid": "4420", 00:16:54.418 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:54.418 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:54.418 "hdgst": false, 00:16:54.418 "ddgst": false 00:16:54.418 }, 00:16:54.418 "method": "bdev_nvme_attach_controller" 00:16:54.418 },{ 00:16:54.418 "params": { 00:16:54.418 "name": "Nvme9", 00:16:54.418 "trtype": "rdma", 00:16:54.418 "traddr": "192.168.100.8", 00:16:54.418 "adrfam": "ipv4", 00:16:54.418 "trsvcid": "4420", 00:16:54.418 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:54.418 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:54.418 "hdgst": false, 00:16:54.418 "ddgst": false 00:16:54.418 }, 00:16:54.418 "method": "bdev_nvme_attach_controller" 00:16:54.418 },{ 00:16:54.418 "params": { 00:16:54.418 "name": "Nvme10", 00:16:54.418 "trtype": "rdma", 00:16:54.418 "traddr": "192.168.100.8", 00:16:54.418 "adrfam": "ipv4", 00:16:54.418 "trsvcid": "4420", 00:16:54.418 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:54.418 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:54.418 "hdgst": false, 00:16:54.418 "ddgst": false 00:16:54.418 }, 00:16:54.418 "method": "bdev_nvme_attach_controller" 00:16:54.418 }' 00:16:54.418 [2024-10-17 17:41:32.704100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.418 [2024-10-17 17:41:32.746892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.350 Running I/O for 1 seconds... 
00:16:56.722 3290.00 IOPS, 205.62 MiB/s
00:16:56.722 Latency(us)
00:16:56.722 [2024-10-17T15:41:35.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:56.722 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:56.722 Verification LBA range: start 0x0 length 0x400
00:16:56.722 Nvme1n1 : 1.16 350.26 21.89 0.00 0.00 180275.68 7693.36 200597.15
00:16:56.722 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:56.722 Verification LBA range: start 0x0 length 0x400
00:16:56.722 Nvme2n1 : 1.17 356.76 22.30 0.00 0.00 174757.28 7978.30 193302.71
00:16:56.722 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:56.722 Verification LBA range: start 0x0 length 0x400
00:16:56.722 Nvme3n1 : 1.17 365.82 22.86 0.00 0.00 168144.72 5955.23 182361.04
00:16:56.722 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:56.722 Verification LBA range: start 0x0 length 0x400
00:16:56.722 Nvme4n1 : 1.17 369.72 23.11 0.00 0.00 164120.16 5271.37 175978.41
00:16:56.722 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:56.722 Verification LBA range: start 0x0 length 0x400
00:16:56.722 Nvme5n1 : 1.17 369.31 23.08 0.00 0.00 162215.06 8947.09 165036.74
00:16:56.722 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:56.722 Verification LBA range: start 0x0 length 0x400
00:16:56.722 Nvme6n1 : 1.17 382.46 23.90 0.00 0.00 154895.84 9744.92 109416.63
00:16:56.722 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:56.722 Verification LBA range: start 0x0 length 0x400
00:16:56.722 Nvme7n1 : 1.17 382.03 23.88 0.00 0.00 152448.80 10314.80 99842.67
00:16:56.722 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:56.722 Verification LBA range: start 0x0 length 0x400
00:16:56.722 Nvme8n1 : 1.17 381.66 23.85 0.00 0.00 150166.99 10542.75 97563.16
00:16:56.722 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:56.722 Verification LBA range: start 0x0 length 0x400
00:16:56.722 Nvme9n1 : 1.18 381.14 23.82 0.00 0.00 148952.31 11283.59 105313.50
00:16:56.722 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:56.722 Verification LBA range: start 0x0 length 0x400
00:16:56.722 Nvme10n1 : 1.18 379.89 23.74 0.00 0.00 146955.18 2934.87 122181.90
00:16:56.722 [2024-10-17T15:41:35.113Z] ===================================================================================================================
00:16:56.722 [2024-10-17T15:41:35.113Z] Total : 3719.05 232.44 0.00 0.00 159944.40 2934.87 200597.15
00:16:56.722 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:16:56.722 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:16:56.722 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:16:56.722 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:16:56.722 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:16:56.722 17:41:35
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:56.722 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:16:56.722 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:16:56.722 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:16:56.722 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:16:56.722 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:56.722 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:16:56.722 rmmod nvme_rdma 00:16:56.722 rmmod nvme_fabrics 00:16:56.722 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:56.980 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:16:56.980 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:16:56.980 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 650237 ']' 00:16:56.980 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 650237 00:16:56.980 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 650237 ']' 00:16:56.980 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 650237 00:16:56.980 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:16:56.980 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:56.980 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 650237 00:16:56.980 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:56.980 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:56.980 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 650237' 00:16:56.980 killing process with pid 650237 00:16:56.980 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 650237 00:16:56.980 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 650237 00:16:57.547 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:57.547 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:16:57.547 00:16:57.547 real 0m12.509s 00:16:57.547 user 0m28.518s 00:16:57.547 sys 0m5.882s 00:16:57.547 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:57.547 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- common/autotest_common.sh@10 -- # set +x 00:16:57.547 ************************************ 00:16:57.547 END TEST nvmf_shutdown_tc1 00:16:57.547 ************************************ 00:16:57.547 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:16:57.547 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:16:57.547 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:57.547 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:16:57.547 ************************************ 00:16:57.547 START TEST nvmf_shutdown_tc2 00:16:57.547 ************************************ 00:16:57.547 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:16:57.547 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:16:57.547 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:16:57.547 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:16:57.547 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:57.547 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:16:57.547 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:16:57.547 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:16:57.548 17:41:35 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:16:57.548 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:16:57.548 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:16:57.548 Found net devices under 0000:18:00.0: mlx_0_0 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:16:57.548 Found net devices under 0000:18:00.1: mlx_0_1 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # rdma_device_init 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # uname 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe ib_core 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:16:57.548 17:41:35 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@528 -- # allocate_nic_ips 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:57.548 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:57.549 17:41:35 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:16:57.549 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:57.549 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:16:57.549 altname enp24s0f0np0 00:16:57.549 altname ens785f0np0 00:16:57.549 inet 192.168.100.8/24 scope global mlx_0_0 00:16:57.549 valid_lft forever preferred_lft forever 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:16:57.549 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:57.549 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:16:57.549 altname enp24s0f1np1 00:16:57.549 altname ens785f1np1 00:16:57.549 inet 192.168.100.9/24 scope global mlx_0_1 00:16:57.549 valid_lft forever preferred_lft forever 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:57.549 17:41:35 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@109 -- # continue 2 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:16:57.549 192.168.100.9' 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
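The continue-2 pattern above narrows the generic net_devs list to RDMA-capable ports by literal name match against what rxe_cfg reports. Condensed into one function (names follow the trace; rxe_cfg wraps scripts/rxe_cfg_small.sh in this harness, and net_devs is assumed to have been populated by the earlier PCI scan):

get_rdma_if_list() {
    local net_dev rxe_net_dev rxe_net_devs
    mapfile -t rxe_net_devs < <(rxe_cfg rxe-net)   # netdevs backing an RDMA device
    for net_dev in "${net_devs[@]}"; do
        for rxe_net_dev in "${rxe_net_devs[@]}"; do
            if [[ $net_dev == "$rxe_net_dev" ]]; then
                echo "$net_dev"
                continue 2   # print each interface once, then move to the next net_dev
            fi
        done
    done
}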
-- nvmf/common.sh@483 -- # echo '192.168.100.8 00:16:57.549 192.168.100.9' 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # head -n 1 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:16:57.549 192.168.100.9' 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # tail -n +2 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # head -n 1 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=651327 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 651327 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 651327 ']' 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
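The head/tail pipeline above peels the first and second addresses off the newline-separated RDMA_IP_LIST. In shell terms (the empty-list error branch is an assumption; the trace only shows the happy path):

RDMA_IP_LIST=$(get_available_rdma_ips)   # "192.168.100.8" and "192.168.100.9", one per line
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
if [ -z "$NVMF_FIRST_TARGET_IP" ]; then
    echo "no RDMA addresses available" >&2
    return 1
fi
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'   # as set at nvmf/common.sh@489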
00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:57.549 17:41:35 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:57.807 [2024-10-17 17:41:35.981598] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:16:57.807 [2024-10-17 17:41:35.981658] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.807 [2024-10-17 17:41:36.053703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:57.807 [2024-10-17 17:41:36.099558] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.807 [2024-10-17 17:41:36.099601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:57.807 [2024-10-17 17:41:36.099610] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:57.807 [2024-10-17 17:41:36.099635] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:57.807 [2024-10-17 17:41:36.099643] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:57.807 [2024-10-17 17:41:36.101166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.807 [2024-10-17 17:41:36.101244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:57.807 [2024-10-17 17:41:36.101326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:16:57.807 [2024-10-17 17:41:36.101328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.065 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:58.065 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:16:58.065 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:16:58.065 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:58.065 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:58.065 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.065 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:58.065 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.065 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:58.065 [2024-10-17 17:41:36.277179] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x119f5c0/0x11a3ab0) succeed. 
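nvmfappstart, reduced to the effects observable in this trace: launch nvmf_tgt with the logged flags, remember its pid, and block until the RPC socket answers. The polling body of waitforlisten is summarized here, not copied; rpc_get_methods is simply a cheap RPC used to probe readiness:

/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!    # 651327 in this run
echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
until rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5    # poll interval is an assumption
done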
00:16:58.065 [2024-10-17 17:41:36.287688] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11a0c50/0x11e5150) succeed. 00:16:58.065 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.065 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:16:58.065 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:16:58.065 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:58.065 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:58.065 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:58.065 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:58.065 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:16:58.065 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:58.065 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:16:58.065 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:58.065 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:16:58.065 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:58.065 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:16:58.323 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:58.323 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:16:58.323 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:58.323 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:16:58.323 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:58.323 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:16:58.323 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:58.323 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:16:58.323 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:58.323 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:16:58.323 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:16:58.323 17:41:36 
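The shutdown.sh@28-29 cat loop starting here (and continuing just below) appends one RPC batch per subsystem to rpcs.txt, which is then replayed in a single rpc_cmd call at @36. The loop shape comes from the trace; the heredoc body is an assumption reconstructed from the Malloc1..Malloc10 bdevs and the rdma listener on 192.168.100.8:4420 that subsequently appear:

num_subsystems=({1..10})
rm -f "$testdir/rpcs.txt"
for i in "${num_subsystems[@]}"; do
    cat <<EOF >> "$testdir/rpcs.txt"
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a $NVMF_FIRST_TARGET_IP -s 4420
EOF
done
rpc_cmd < "$testdir/rpcs.txt"   # one target round-trip for all ten subsystems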
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:16:58.323 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:16:58.323 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.323 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:58.323 Malloc1 00:16:58.323 [2024-10-17 17:41:36.534474] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:58.323 Malloc2 00:16:58.323 Malloc3 00:16:58.323 Malloc4 00:16:58.323 Malloc5 00:16:58.581 Malloc6 00:16:58.581 Malloc7 00:16:58.581 Malloc8 00:16:58.581 Malloc9 00:16:58.581 Malloc10 00:16:58.581 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.581 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:16:58.581 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:58.581 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:58.839 17:41:36 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=651554 00:16:58.839 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 651554 /var/tmp/bdevperf.sock 00:16:58.839 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 651554 ']' 00:16:58.839 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:58.839 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:58.839 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:58.839 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:58.839 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:58.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
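The bdevperf launch visible at shutdown.sh@103-105, written out; the /dev/fd/63 in the trace is bash's process substitution carrying the generated JSON config:

/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json {1..10}) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!    # 651554 in this run

Queue depth 64 and a 64 KiB IO size, matching the per-job headers in the latency summary further down.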
00:16:58.839 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:58.839 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:16:58.839 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:58.839 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:16:58.839 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:16:58.839 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:16:58.839 { 00:16:58.839 "params": { 00:16:58.839 "name": "Nvme$subsystem", 00:16:58.839 "trtype": "$TEST_TRANSPORT", 00:16:58.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:58.839 "adrfam": "ipv4", 00:16:58.839 "trsvcid": "$NVMF_PORT", 00:16:58.840 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:58.840 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:58.840 "hdgst": ${hdgst:-false}, 00:16:58.840 "ddgst": ${ddgst:-false} 00:16:58.840 }, 00:16:58.840 "method": "bdev_nvme_attach_controller" 00:16:58.840 } 00:16:58.840 EOF 00:16:58.840 )") 00:16:58.840 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:16:58.840 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:16:58.840 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:16:58.840 { 00:16:58.840 "params": { 00:16:58.840 "name": "Nvme$subsystem", 00:16:58.840 "trtype": "$TEST_TRANSPORT", 00:16:58.840 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:58.840 "adrfam": "ipv4", 00:16:58.840 "trsvcid": "$NVMF_PORT", 00:16:58.840 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:58.840 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:58.840 "hdgst": ${hdgst:-false}, 00:16:58.840 "ddgst": ${ddgst:-false} 00:16:58.840 }, 00:16:58.840 "method": "bdev_nvme_attach_controller" 00:16:58.840 } 00:16:58.840 EOF 00:16:58.840 )") 00:16:58.840 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:16:58.840 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:16:58.840 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:16:58.840 { 00:16:58.840 "params": { 00:16:58.840 "name": "Nvme$subsystem", 00:16:58.840 "trtype": "$TEST_TRANSPORT", 00:16:58.840 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:58.840 "adrfam": "ipv4", 00:16:58.840 "trsvcid": "$NVMF_PORT", 00:16:58.840 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:58.840 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:58.840 "hdgst": ${hdgst:-false}, 00:16:58.840 "ddgst": ${ddgst:-false} 00:16:58.840 }, 00:16:58.840 "method": "bdev_nvme_attach_controller" 00:16:58.840 } 00:16:58.840 EOF 00:16:58.840 )") 00:16:58.840 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:16:58.840 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:16:58.840 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:16:58.840 { 00:16:58.840 "params": { 00:16:58.840 "name": "Nvme$subsystem", 00:16:58.840 "trtype": "$TEST_TRANSPORT", 00:16:58.840 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:58.840 "adrfam": "ipv4", 00:16:58.840 "trsvcid": "$NVMF_PORT", 00:16:58.840 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:58.840 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:58.840 "hdgst": ${hdgst:-false}, 00:16:58.840 "ddgst": ${ddgst:-false} 00:16:58.840 }, 00:16:58.840 "method": "bdev_nvme_attach_controller" 00:16:58.840 } 00:16:58.840 EOF 00:16:58.840 )") 00:16:58.840 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:16:58.840 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:16:58.840 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:16:58.840 { 00:16:58.840 "params": { 00:16:58.840 "name": "Nvme$subsystem", 00:16:58.840 "trtype": "$TEST_TRANSPORT", 00:16:58.840 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:58.840 "adrfam": "ipv4", 00:16:58.840 "trsvcid": "$NVMF_PORT", 00:16:58.840 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:58.840 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:58.840 "hdgst": ${hdgst:-false}, 00:16:58.840 "ddgst": ${ddgst:-false} 00:16:58.840 }, 00:16:58.840 "method": "bdev_nvme_attach_controller" 00:16:58.840 } 00:16:58.840 EOF 00:16:58.840 )") 00:16:58.840 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:16:58.840 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:16:58.840 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:16:58.840 { 00:16:58.840 "params": { 00:16:58.840 "name": "Nvme$subsystem", 00:16:58.840 "trtype": "$TEST_TRANSPORT", 00:16:58.840 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:58.840 "adrfam": "ipv4", 00:16:58.840 "trsvcid": "$NVMF_PORT", 00:16:58.840 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:58.840 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:58.840 "hdgst": ${hdgst:-false}, 00:16:58.840 "ddgst": ${ddgst:-false} 00:16:58.840 }, 00:16:58.840 "method": "bdev_nvme_attach_controller" 00:16:58.840 } 00:16:58.840 EOF 00:16:58.840 )") 00:16:58.840 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:16:58.840 [2024-10-17 17:41:37.047003] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
00:16:58.840 [2024-10-17 17:41:37.047062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid651554 ] 00:16:58.840 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:16:58.840 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:16:58.840 { 00:16:58.840 "params": { 00:16:58.840 "name": "Nvme$subsystem", 00:16:58.840 "trtype": "$TEST_TRANSPORT", 00:16:58.840 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:58.840 "adrfam": "ipv4", 00:16:58.840 "trsvcid": "$NVMF_PORT", 00:16:58.840 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:58.840 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:58.840 "hdgst": ${hdgst:-false}, 00:16:58.840 "ddgst": ${ddgst:-false} 00:16:58.840 }, 00:16:58.840 "method": "bdev_nvme_attach_controller" 00:16:58.840 } 00:16:58.840 EOF 00:16:58.840 )") 00:16:58.840 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:16:58.840 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:16:58.840 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:16:58.840 { 00:16:58.840 "params": { 00:16:58.840 "name": "Nvme$subsystem", 00:16:58.840 "trtype": "$TEST_TRANSPORT", 00:16:58.840 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:58.840 "adrfam": "ipv4", 00:16:58.840 "trsvcid": "$NVMF_PORT", 00:16:58.840 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:58.840 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:58.840 "hdgst": ${hdgst:-false}, 00:16:58.840 "ddgst": ${ddgst:-false} 00:16:58.840 }, 00:16:58.840 "method": "bdev_nvme_attach_controller" 00:16:58.840 } 00:16:58.840 EOF 00:16:58.840 )") 00:16:58.840 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:16:58.840 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:16:58.840 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:16:58.840 { 00:16:58.840 "params": { 00:16:58.840 "name": "Nvme$subsystem", 00:16:58.840 "trtype": "$TEST_TRANSPORT", 00:16:58.840 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:58.840 "adrfam": "ipv4", 00:16:58.840 "trsvcid": "$NVMF_PORT", 00:16:58.840 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:58.840 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:58.840 "hdgst": ${hdgst:-false}, 00:16:58.840 "ddgst": ${ddgst:-false} 00:16:58.840 }, 00:16:58.840 "method": "bdev_nvme_attach_controller" 00:16:58.840 } 00:16:58.840 EOF 00:16:58.840 )") 00:16:58.840 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:16:58.840 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:16:58.840 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:16:58.840 { 00:16:58.840 "params": { 00:16:58.840 "name": "Nvme$subsystem", 00:16:58.840 "trtype": "$TEST_TRANSPORT", 00:16:58.840 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:16:58.840 "adrfam": "ipv4", 00:16:58.840 "trsvcid": "$NVMF_PORT", 00:16:58.840 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:58.840 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:58.840 "hdgst": ${hdgst:-false}, 00:16:58.840 "ddgst": ${ddgst:-false} 00:16:58.840 }, 00:16:58.840 "method": "bdev_nvme_attach_controller" 00:16:58.840 } 00:16:58.840 EOF 00:16:58.840 )") 00:16:58.840 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:16:58.840 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # jq . 00:16:58.840 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:16:58.840 17:41:37 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:16:58.840 "params": { 00:16:58.840 "name": "Nvme1", 00:16:58.840 "trtype": "rdma", 00:16:58.840 "traddr": "192.168.100.8", 00:16:58.840 "adrfam": "ipv4", 00:16:58.840 "trsvcid": "4420", 00:16:58.840 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:58.840 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:58.840 "hdgst": false, 00:16:58.840 "ddgst": false 00:16:58.840 }, 00:16:58.840 "method": "bdev_nvme_attach_controller" 00:16:58.840 },{ 00:16:58.840 "params": { 00:16:58.840 "name": "Nvme2", 00:16:58.840 "trtype": "rdma", 00:16:58.840 "traddr": "192.168.100.8", 00:16:58.840 "adrfam": "ipv4", 00:16:58.840 "trsvcid": "4420", 00:16:58.840 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:58.840 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:58.840 "hdgst": false, 00:16:58.840 "ddgst": false 00:16:58.840 }, 00:16:58.840 "method": "bdev_nvme_attach_controller" 00:16:58.840 },{ 00:16:58.840 "params": { 00:16:58.840 "name": "Nvme3", 00:16:58.840 "trtype": "rdma", 00:16:58.840 "traddr": "192.168.100.8", 00:16:58.840 "adrfam": "ipv4", 00:16:58.840 "trsvcid": "4420", 00:16:58.841 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:58.841 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:58.841 "hdgst": false, 00:16:58.841 "ddgst": false 00:16:58.841 }, 00:16:58.841 "method": "bdev_nvme_attach_controller" 00:16:58.841 },{ 00:16:58.841 "params": { 00:16:58.841 "name": "Nvme4", 00:16:58.841 "trtype": "rdma", 00:16:58.841 "traddr": "192.168.100.8", 00:16:58.841 "adrfam": "ipv4", 00:16:58.841 "trsvcid": "4420", 00:16:58.841 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:58.841 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:58.841 "hdgst": false, 00:16:58.841 "ddgst": false 00:16:58.841 }, 00:16:58.841 "method": "bdev_nvme_attach_controller" 00:16:58.841 },{ 00:16:58.841 "params": { 00:16:58.841 "name": "Nvme5", 00:16:58.841 "trtype": "rdma", 00:16:58.841 "traddr": "192.168.100.8", 00:16:58.841 "adrfam": "ipv4", 00:16:58.841 "trsvcid": "4420", 00:16:58.841 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:58.841 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:58.841 "hdgst": false, 00:16:58.841 "ddgst": false 00:16:58.841 }, 00:16:58.841 "method": "bdev_nvme_attach_controller" 00:16:58.841 },{ 00:16:58.841 "params": { 00:16:58.841 "name": "Nvme6", 00:16:58.841 "trtype": "rdma", 00:16:58.841 "traddr": "192.168.100.8", 00:16:58.841 "adrfam": "ipv4", 00:16:58.841 "trsvcid": "4420", 00:16:58.841 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:58.841 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:58.841 "hdgst": false, 00:16:58.841 "ddgst": false 00:16:58.841 }, 00:16:58.841 "method": "bdev_nvme_attach_controller" 00:16:58.841 },{ 00:16:58.841 "params": { 00:16:58.841 "name": "Nvme7", 00:16:58.841 
"trtype": "rdma", 00:16:58.841 "traddr": "192.168.100.8", 00:16:58.841 "adrfam": "ipv4", 00:16:58.841 "trsvcid": "4420", 00:16:58.841 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:58.841 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:58.841 "hdgst": false, 00:16:58.841 "ddgst": false 00:16:58.841 }, 00:16:58.841 "method": "bdev_nvme_attach_controller" 00:16:58.841 },{ 00:16:58.841 "params": { 00:16:58.841 "name": "Nvme8", 00:16:58.841 "trtype": "rdma", 00:16:58.841 "traddr": "192.168.100.8", 00:16:58.841 "adrfam": "ipv4", 00:16:58.841 "trsvcid": "4420", 00:16:58.841 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:58.841 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:58.841 "hdgst": false, 00:16:58.841 "ddgst": false 00:16:58.841 }, 00:16:58.841 "method": "bdev_nvme_attach_controller" 00:16:58.841 },{ 00:16:58.841 "params": { 00:16:58.841 "name": "Nvme9", 00:16:58.841 "trtype": "rdma", 00:16:58.841 "traddr": "192.168.100.8", 00:16:58.841 "adrfam": "ipv4", 00:16:58.841 "trsvcid": "4420", 00:16:58.841 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:58.841 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:58.841 "hdgst": false, 00:16:58.841 "ddgst": false 00:16:58.841 }, 00:16:58.841 "method": "bdev_nvme_attach_controller" 00:16:58.841 },{ 00:16:58.841 "params": { 00:16:58.841 "name": "Nvme10", 00:16:58.841 "trtype": "rdma", 00:16:58.841 "traddr": "192.168.100.8", 00:16:58.841 "adrfam": "ipv4", 00:16:58.841 "trsvcid": "4420", 00:16:58.841 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:58.841 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:58.841 "hdgst": false, 00:16:58.841 "ddgst": false 00:16:58.841 }, 00:16:58.841 "method": "bdev_nvme_attach_controller" 00:16:58.841 }' 00:16:58.841 [2024-10-17 17:41:37.120653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.841 [2024-10-17 17:41:37.163438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.772 Running I/O for 10 seconds... 
00:16:59.772 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:59.772 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:16:59.772 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:59.772 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.772 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:00.029 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.029 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:17:00.029 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:00.029 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:17:00.029 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:17:00.029 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:17:00.029 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:17:00.030 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:17:00.030 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:00.030 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:17:00.030 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.030 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:00.030 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.030 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=4 00:17:00.030 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 4 -ge 100 ']' 00:17:00.030 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:17:00.287 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:17:00.287 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:17:00.287 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:00.287 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:17:00.287 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.287 
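The polling that starts here (and completes just below, once read_io_count reaches 163) is waitforio from shutdown.sh@51-70. Condensed, with rpc_cmd standing in for the harness wrapper around scripts/rpc.py:

waitforio() {
    local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0    # enough traffic observed, safe to shut down mid-I/O
            break
        fi
        sleep 0.25
    done
    return $ret
}

waitforio /var/tmp/bdevperf.sock Nvme1n1   # this run saw 4 ops, then 163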
17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:00.545 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.545 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=163 00:17:00.545 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 163 -ge 100 ']' 00:17:00.545 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:17:00.545 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:17:00.545 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:17:00.545 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 651554 00:17:00.545 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 651554 ']' 00:17:00.545 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 651554 00:17:00.545 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:17:00.545 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:00.545 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 651554 00:17:00.545 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:00.545 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:00.545 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 651554' 00:17:00.545 killing process with pid 651554 00:17:00.545 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 651554 00:17:00.545 17:41:38 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 651554 00:17:00.545 Received shutdown signal, test time was about 0.823717 seconds 00:17:00.545 00:17:00.545 Latency(us) 00:17:00.545 [2024-10-17T15:41:38.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.545 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:00.545 Verification LBA range: start 0x0 length 0x400 00:17:00.545 Nvme1n1 : 0.81 356.15 22.26 0.00 0.00 176409.92 5527.82 206979.78 00:17:00.545 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:00.545 Verification LBA range: start 0x0 length 0x400 00:17:00.545 Nvme2n1 : 0.81 354.38 22.15 0.00 0.00 173797.28 8092.27 199685.34 00:17:00.545 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:00.545 Verification LBA range: start 0x0 length 0x400 00:17:00.545 Nvme3n1 : 0.81 355.09 22.19 0.00 0.00 170076.41 8434.20 193302.71 00:17:00.545 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:00.545 Verification LBA range: start 0x0 length 0x400 00:17:00.545 Nvme4n1 : 0.81 393.96 24.62 0.00 0.00 
150253.17 5299.87 129476.34 00:17:00.545 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:00.545 Verification LBA range: start 0x0 length 0x400 00:17:00.545 Nvme5n1 : 0.81 393.26 24.58 0.00 0.00 147869.16 9289.02 119446.48 00:17:00.545 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:00.545 Verification LBA range: start 0x0 length 0x400 00:17:00.545 Nvme6n1 : 0.81 392.70 24.54 0.00 0.00 144594.05 9630.94 111696.14 00:17:00.545 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:00.545 Verification LBA range: start 0x0 length 0x400 00:17:00.545 Nvme7n1 : 0.82 392.04 24.50 0.00 0.00 142205.64 10143.83 106225.31 00:17:00.545 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:00.545 Verification LBA range: start 0x0 length 0x400 00:17:00.545 Nvme8n1 : 0.82 391.39 24.46 0.00 0.00 139301.93 10599.74 99386.77 00:17:00.545 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:00.545 Verification LBA range: start 0x0 length 0x400 00:17:00.545 Nvme9n1 : 0.82 390.64 24.41 0.00 0.00 137142.63 11397.57 95283.65 00:17:00.545 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:00.545 Verification LBA range: start 0x0 length 0x400 00:17:00.545 Nvme10n1 : 0.82 311.04 19.44 0.00 0.00 167963.66 2963.37 212450.62 00:17:00.545 [2024-10-17T15:41:38.936Z] =================================================================================================================== 00:17:00.545 [2024-10-17T15:41:38.936Z] Total : 3730.64 233.16 0.00 0.00 154098.02 2963.37 212450.62 00:17:00.803 17:41:39 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:17:01.735 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 651327 00:17:01.735 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:17:01.735 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:17:01.735 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:01.735 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:01.735 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:17:01.735 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:01.735 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:17:01.735 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:01.735 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:01.735 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:17:01.735 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:01.735 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 
00:17:01.735 rmmod nvme_rdma 00:17:01.993 rmmod nvme_fabrics 00:17:01.993 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:01.993 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:17:01.993 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:17:01.993 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 651327 ']' 00:17:01.993 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 651327 00:17:01.993 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 651327 ']' 00:17:01.993 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 651327 00:17:01.993 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:17:01.993 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:01.993 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 651327 00:17:01.993 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:01.993 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:01.993 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 651327' 00:17:01.993 killing process with pid 651327 00:17:01.993 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 651327 00:17:01.993 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 651327 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:17:02.560 00:17:02.560 real 0m4.951s 00:17:02.560 user 0m19.883s 00:17:02.560 sys 0m1.160s 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:17:02.560 ************************************ 00:17:02.560 END TEST nvmf_shutdown_tc2 00:17:02.560 ************************************ 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:17:02.560 ************************************ 00:17:02.560 START TEST nvmf_shutdown_tc3 00:17:02.560 ************************************ 00:17:02.560 17:41:40 
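Before tc3 proceeds below, the tc2 teardown just logged is worth spelling out. Condensed from the trace (nvmf/common.sh@121-129 plus the killprocess helper from autotest_common.sh; the bodies are summaries of the logged steps, not copies of the source):

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2> /dev/null || return 0       # already gone
    ps --no-headers -o comm= "$pid" > /dev/null   # sanity check, as in the trace
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
}

sync
set +e
for i in {1..20}; do   # retry shape condensed; module refs can linger briefly
    modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
done
set -e
killprocess "$nvmfpid"   # 651327, seen above running as reactor_1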
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:17:02.560 17:41:40 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:17:02.560 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown 
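The arrays being filled here form the NIC whitelist from nvmf/common.sh@315-354, trimmed to what this rig exercises. pci_bus_cache is the harness's map from "vendor:device" to PCI addresses; the ConnectX model names in the comments are an annotation, not from the log:

mellanox=0x15b3
mlx=()
mlx+=(${pci_bus_cache["$mellanox:0x101b"]})   # ConnectX-6
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})   # ConnectX-5
mlx+=(${pci_bus_cache["$mellanox:0x1013"]})   # ConnectX-4, the 0x1013 found above
pci_devs=("${mlx[@]}")     # SPDK_TEST_NVMF_NICS=mlx5, so only the mlx entries count
(( ${#pci_devs[@]} > 0 )) || return 1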
]] 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:17:02.560 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.560 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:17:02.561 Found net devices under 0000:18:00.0: mlx_0_0 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # 
[[ rdma == tcp ]] 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:17:02.561 Found net devices under 0000:18:00.1: mlx_0_1 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # rdma_device_init 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # uname 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@528 -- # allocate_nic_ips 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # 
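The two "Found net devices under ..." lines come from the sysfs walk at nvmf/common.sh@408-427, which maps each whitelisted PCI function to its netdev name (the rxe_cfg call this trace is in the middle of resumes on the next line):

for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done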
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:02.561 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:02.561 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:17:02.561 altname enp24s0f0np0 00:17:02.561 altname ens785f0np0 00:17:02.561 inet 192.168.100.8/24 scope global mlx_0_0 00:17:02.561 valid_lft forever preferred_lft forever 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:02.561 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:02.561 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:17:02.561 altname enp24s0f1np1 00:17:02.561 altname ens785f1np1 00:17:02.561 inet 192.168.100.9/24 scope global mlx_0_1 00:17:02.561 valid_lft forever preferred_lft forever 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # for net_dev in 
"${net_devs[@]}" 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@109 -- # continue 2 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:02.561 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:02.562 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:02.562 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:17:02.562 192.168.100.9' 00:17:02.562 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:17:02.562 192.168.100.9' 00:17:02.562 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # head -n 1 00:17:02.819 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:02.819 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:17:02.819 192.168.100.9' 00:17:02.819 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # tail -n +2 00:17:02.819 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # head -n 1 00:17:02.819 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:02.819 
17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:17:02.819 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:02.819 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:17:02.819 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:17:02.819 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:17:02.819 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:17:02.819 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:02.819 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:02.819 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:02.819 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=652115 00:17:02.819 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 652115 00:17:02.819 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:02.819 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 652115 ']' 00:17:02.819 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.819 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:02.819 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.819 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:02.819 17:41:40 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:02.819 [2024-10-17 17:41:41.047661] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:17:02.819 [2024-10-17 17:41:41.047730] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.819 [2024-10-17 17:41:41.122845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:02.819 [2024-10-17 17:41:41.170251] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:02.819 [2024-10-17 17:41:41.170296] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:02.819 [2024-10-17 17:41:41.170306] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.819 [2024-10-17 17:41:41.170314] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.819 [2024-10-17 17:41:41.170321] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:02.819 [2024-10-17 17:41:41.171805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.819 [2024-10-17 17:41:41.171883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:02.819 [2024-10-17 17:41:41.171908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:02.819 [2024-10-17 17:41:41.171910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:03.078 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:03.078 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:17:03.078 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:03.078 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:03.078 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:03.078 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:03.078 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:03.078 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.078 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:03.078 [2024-10-17 17:41:41.352767] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22695c0/0x226dab0) succeed. 00:17:03.078 [2024-10-17 17:41:41.363280] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x226ac50/0x22af150) succeed. 
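
With both NICs addressed and the two mlx5 IB devices created, nvmfappstart launches the target and blocks until its RPC socket answers. A simplified sketch of that launch-and-wait step, assuming $rootdir points at the SPDK checkout; the real waitforlisten helper in autotest_common.sh does more bookkeeping than this loop:

```bash
# Launch the target on cores 1-4 (-m 0x1E) with all tracepoint groups
# enabled (-e 0xFFFF), then poll the RPC socket until it responds.
"$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || exit 1   # give up if the target already exited
    sleep 0.1
done
```
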
00:17:03.337 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.337 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:17:03.337 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:17:03.337 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:03.337 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:03.337 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:03.337 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:03.337 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:03.337 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:03.337 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:03.337 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:03.337 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:03.337 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:03.337 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:03.337 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:03.337 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:03.337 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:03.337 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:03.337 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:03.337 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:03.337 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:03.337 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:03.337 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:03.337 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:03.337 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:03.337 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:17:03.337 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:17:03.337 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.337 17:41:41 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:03.337 Malloc1 00:17:03.337 [2024-10-17 17:41:41.610857] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:03.337 Malloc2 00:17:03.337 Malloc3 00:17:03.337 Malloc4 00:17:03.595 Malloc5 00:17:03.595 Malloc6 00:17:03.595 Malloc7 00:17:03.595 Malloc8 00:17:03.595 Malloc9 00:17:03.854 Malloc10 00:17:03.854 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.854 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:17:03.854 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:03.854 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:03.854 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=652278 00:17:03.854 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 652278 /var/tmp/bdevperf.sock 00:17:03.854 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 652278 ']' 00:17:03.854 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:03.854 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:03.854 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:03.854 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:03.854 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:17:03.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
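
The create_subsystems phase traced above writes one block per subsystem into rpcs.txt and replays the whole file in a single rpc_cmd batch; that is what produces the ten Malloc bdevs and the RDMA listener on 192.168.100.8:4420. Roughly what each generated block contains, as a sketch; the exact sizes and flags live in target/shutdown.sh and may differ:

```bash
# One rpcs.txt block per subsystem $i in {1..10}, replayed via
# `rpc_cmd < rpcs.txt`. Malloc size/block size here are illustrative.
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
```
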
00:17:03.854 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:03.854 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:17:03.854 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:03.854 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:17:03.854 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:17:03.854 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:17:03.854 { 00:17:03.854 "params": { 00:17:03.854 "name": "Nvme$subsystem", 00:17:03.854 "trtype": "$TEST_TRANSPORT", 00:17:03.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:03.854 "adrfam": "ipv4", 00:17:03.854 "trsvcid": "$NVMF_PORT", 00:17:03.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:03.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:03.854 "hdgst": ${hdgst:-false}, 00:17:03.854 "ddgst": ${ddgst:-false} 00:17:03.854 }, 00:17:03.854 "method": "bdev_nvme_attach_controller" 00:17:03.854 } 00:17:03.854 EOF 00:17:03.854 )") 00:17:03.854 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:17:03.854 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:17:03.854 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:17:03.854 { 00:17:03.854 "params": { 00:17:03.854 "name": "Nvme$subsystem", 00:17:03.854 "trtype": "$TEST_TRANSPORT", 00:17:03.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:03.854 "adrfam": "ipv4", 00:17:03.854 "trsvcid": "$NVMF_PORT", 00:17:03.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:03.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:03.854 "hdgst": ${hdgst:-false}, 00:17:03.854 "ddgst": ${ddgst:-false} 00:17:03.854 }, 00:17:03.854 "method": "bdev_nvme_attach_controller" 00:17:03.854 } 00:17:03.854 EOF 00:17:03.854 )") 00:17:03.854 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:17:03.854 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:17:03.854 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:17:03.854 { 00:17:03.854 "params": { 00:17:03.854 "name": "Nvme$subsystem", 00:17:03.854 "trtype": "$TEST_TRANSPORT", 00:17:03.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:03.854 "adrfam": "ipv4", 00:17:03.854 "trsvcid": "$NVMF_PORT", 00:17:03.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:03.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:03.854 "hdgst": ${hdgst:-false}, 00:17:03.854 "ddgst": ${ddgst:-false} 00:17:03.854 }, 00:17:03.854 "method": "bdev_nvme_attach_controller" 00:17:03.854 } 00:17:03.854 EOF 00:17:03.854 )") 00:17:03.854 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:17:03.855 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:17:03.855 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:17:03.855 { 00:17:03.855 "params": { 00:17:03.855 "name": "Nvme$subsystem", 00:17:03.855 "trtype": "$TEST_TRANSPORT", 00:17:03.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:03.855 "adrfam": "ipv4", 00:17:03.855 "trsvcid": "$NVMF_PORT", 00:17:03.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:03.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:03.855 "hdgst": ${hdgst:-false}, 00:17:03.855 "ddgst": ${ddgst:-false} 00:17:03.855 }, 00:17:03.855 "method": "bdev_nvme_attach_controller" 00:17:03.855 } 00:17:03.855 EOF 00:17:03.855 )") 00:17:03.855 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:17:03.855 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:17:03.855 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:17:03.855 { 00:17:03.855 "params": { 00:17:03.855 "name": "Nvme$subsystem", 00:17:03.855 "trtype": "$TEST_TRANSPORT", 00:17:03.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:03.855 "adrfam": "ipv4", 00:17:03.855 "trsvcid": "$NVMF_PORT", 00:17:03.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:03.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:03.855 "hdgst": ${hdgst:-false}, 00:17:03.855 "ddgst": ${ddgst:-false} 00:17:03.855 }, 00:17:03.855 "method": "bdev_nvme_attach_controller" 00:17:03.855 } 00:17:03.855 EOF 00:17:03.855 )") 00:17:03.855 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:17:03.855 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:17:03.855 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:17:03.855 { 00:17:03.855 "params": { 00:17:03.855 "name": "Nvme$subsystem", 00:17:03.855 "trtype": "$TEST_TRANSPORT", 00:17:03.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:03.855 "adrfam": "ipv4", 00:17:03.855 "trsvcid": "$NVMF_PORT", 00:17:03.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:03.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:03.855 "hdgst": ${hdgst:-false}, 00:17:03.855 "ddgst": ${ddgst:-false} 00:17:03.855 }, 00:17:03.855 "method": "bdev_nvme_attach_controller" 00:17:03.855 } 00:17:03.855 EOF 00:17:03.855 )") 00:17:03.855 [2024-10-17 17:41:42.105018] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
00:17:03.855 [2024-10-17 17:41:42.105072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid652278 ] 00:17:03.855 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:17:03.855 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:17:03.855 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:17:03.855 { 00:17:03.855 "params": { 00:17:03.855 "name": "Nvme$subsystem", 00:17:03.855 "trtype": "$TEST_TRANSPORT", 00:17:03.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:03.855 "adrfam": "ipv4", 00:17:03.855 "trsvcid": "$NVMF_PORT", 00:17:03.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:03.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:03.855 "hdgst": ${hdgst:-false}, 00:17:03.855 "ddgst": ${ddgst:-false} 00:17:03.855 }, 00:17:03.855 "method": "bdev_nvme_attach_controller" 00:17:03.855 } 00:17:03.855 EOF 00:17:03.855 )") 00:17:03.855 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:17:03.855 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:17:03.855 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:17:03.855 { 00:17:03.855 "params": { 00:17:03.855 "name": "Nvme$subsystem", 00:17:03.855 "trtype": "$TEST_TRANSPORT", 00:17:03.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:03.855 "adrfam": "ipv4", 00:17:03.855 "trsvcid": "$NVMF_PORT", 00:17:03.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:03.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:03.855 "hdgst": ${hdgst:-false}, 00:17:03.855 "ddgst": ${ddgst:-false} 00:17:03.855 }, 00:17:03.855 "method": "bdev_nvme_attach_controller" 00:17:03.855 } 00:17:03.855 EOF 00:17:03.855 )") 00:17:03.855 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:17:03.855 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:17:03.855 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:17:03.855 { 00:17:03.855 "params": { 00:17:03.855 "name": "Nvme$subsystem", 00:17:03.855 "trtype": "$TEST_TRANSPORT", 00:17:03.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:03.855 "adrfam": "ipv4", 00:17:03.855 "trsvcid": "$NVMF_PORT", 00:17:03.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:03.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:03.855 "hdgst": ${hdgst:-false}, 00:17:03.855 "ddgst": ${ddgst:-false} 00:17:03.855 }, 00:17:03.855 "method": "bdev_nvme_attach_controller" 00:17:03.855 } 00:17:03.855 EOF 00:17:03.855 )") 00:17:03.855 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:17:03.855 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:17:03.855 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:17:03.855 { 00:17:03.855 "params": { 00:17:03.855 "name": 
"Nvme$subsystem", 00:17:03.855 "trtype": "$TEST_TRANSPORT", 00:17:03.855 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:03.855 "adrfam": "ipv4", 00:17:03.855 "trsvcid": "$NVMF_PORT", 00:17:03.855 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:03.855 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:03.855 "hdgst": ${hdgst:-false}, 00:17:03.855 "ddgst": ${ddgst:-false} 00:17:03.855 }, 00:17:03.855 "method": "bdev_nvme_attach_controller" 00:17:03.855 } 00:17:03.855 EOF 00:17:03.855 )") 00:17:03.855 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:17:03.855 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # jq . 00:17:03.855 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:17:03.855 17:41:42 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:17:03.855 "params": { 00:17:03.855 "name": "Nvme1", 00:17:03.855 "trtype": "rdma", 00:17:03.855 "traddr": "192.168.100.8", 00:17:03.855 "adrfam": "ipv4", 00:17:03.855 "trsvcid": "4420", 00:17:03.855 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:03.855 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:03.855 "hdgst": false, 00:17:03.855 "ddgst": false 00:17:03.855 }, 00:17:03.855 "method": "bdev_nvme_attach_controller" 00:17:03.855 },{ 00:17:03.855 "params": { 00:17:03.855 "name": "Nvme2", 00:17:03.855 "trtype": "rdma", 00:17:03.855 "traddr": "192.168.100.8", 00:17:03.855 "adrfam": "ipv4", 00:17:03.855 "trsvcid": "4420", 00:17:03.855 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:03.855 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:03.855 "hdgst": false, 00:17:03.855 "ddgst": false 00:17:03.855 }, 00:17:03.855 "method": "bdev_nvme_attach_controller" 00:17:03.855 },{ 00:17:03.855 "params": { 00:17:03.855 "name": "Nvme3", 00:17:03.855 "trtype": "rdma", 00:17:03.855 "traddr": "192.168.100.8", 00:17:03.855 "adrfam": "ipv4", 00:17:03.855 "trsvcid": "4420", 00:17:03.855 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:03.855 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:03.855 "hdgst": false, 00:17:03.855 "ddgst": false 00:17:03.855 }, 00:17:03.855 "method": "bdev_nvme_attach_controller" 00:17:03.855 },{ 00:17:03.855 "params": { 00:17:03.855 "name": "Nvme4", 00:17:03.855 "trtype": "rdma", 00:17:03.855 "traddr": "192.168.100.8", 00:17:03.855 "adrfam": "ipv4", 00:17:03.855 "trsvcid": "4420", 00:17:03.855 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:03.855 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:03.855 "hdgst": false, 00:17:03.855 "ddgst": false 00:17:03.855 }, 00:17:03.855 "method": "bdev_nvme_attach_controller" 00:17:03.855 },{ 00:17:03.855 "params": { 00:17:03.855 "name": "Nvme5", 00:17:03.855 "trtype": "rdma", 00:17:03.855 "traddr": "192.168.100.8", 00:17:03.855 "adrfam": "ipv4", 00:17:03.855 "trsvcid": "4420", 00:17:03.855 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:17:03.855 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:17:03.855 "hdgst": false, 00:17:03.855 "ddgst": false 00:17:03.855 }, 00:17:03.855 "method": "bdev_nvme_attach_controller" 00:17:03.855 },{ 00:17:03.855 "params": { 00:17:03.855 "name": "Nvme6", 00:17:03.855 "trtype": "rdma", 00:17:03.855 "traddr": "192.168.100.8", 00:17:03.855 "adrfam": "ipv4", 00:17:03.855 "trsvcid": "4420", 00:17:03.855 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:03.855 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:03.855 "hdgst": false, 00:17:03.855 "ddgst": false 00:17:03.855 }, 00:17:03.855 "method": 
"bdev_nvme_attach_controller" 00:17:03.855 },{ 00:17:03.855 "params": { 00:17:03.855 "name": "Nvme7", 00:17:03.855 "trtype": "rdma", 00:17:03.855 "traddr": "192.168.100.8", 00:17:03.855 "adrfam": "ipv4", 00:17:03.855 "trsvcid": "4420", 00:17:03.855 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:03.855 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:03.855 "hdgst": false, 00:17:03.855 "ddgst": false 00:17:03.855 }, 00:17:03.855 "method": "bdev_nvme_attach_controller" 00:17:03.855 },{ 00:17:03.855 "params": { 00:17:03.855 "name": "Nvme8", 00:17:03.856 "trtype": "rdma", 00:17:03.856 "traddr": "192.168.100.8", 00:17:03.856 "adrfam": "ipv4", 00:17:03.856 "trsvcid": "4420", 00:17:03.856 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:17:03.856 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:03.856 "hdgst": false, 00:17:03.856 "ddgst": false 00:17:03.856 }, 00:17:03.856 "method": "bdev_nvme_attach_controller" 00:17:03.856 },{ 00:17:03.856 "params": { 00:17:03.856 "name": "Nvme9", 00:17:03.856 "trtype": "rdma", 00:17:03.856 "traddr": "192.168.100.8", 00:17:03.856 "adrfam": "ipv4", 00:17:03.856 "trsvcid": "4420", 00:17:03.856 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:03.856 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:03.856 "hdgst": false, 00:17:03.856 "ddgst": false 00:17:03.856 }, 00:17:03.856 "method": "bdev_nvme_attach_controller" 00:17:03.856 },{ 00:17:03.856 "params": { 00:17:03.856 "name": "Nvme10", 00:17:03.856 "trtype": "rdma", 00:17:03.856 "traddr": "192.168.100.8", 00:17:03.856 "adrfam": "ipv4", 00:17:03.856 "trsvcid": "4420", 00:17:03.856 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:03.856 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:03.856 "hdgst": false, 00:17:03.856 "ddgst": false 00:17:03.856 }, 00:17:03.856 "method": "bdev_nvme_attach_controller" 00:17:03.856 }' 00:17:03.856 [2024-10-17 17:41:42.179498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.856 [2024-10-17 17:41:42.222363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.787 Running I/O for 10 seconds... 
00:17:04.787 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:04.787 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:17:04.787 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:04.787 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.787 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:05.045 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.045 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:05.045 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:17:05.045 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:05.045 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:17:05.045 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:17:05.045 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:17:05.045 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:17:05.045 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:17:05.045 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:05.045 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:17:05.045 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.045 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:05.045 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.045 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:17:05.045 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:17:05.045 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:17:05.302 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:17:05.302 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:17:05.303 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:05.303 17:41:43 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:17:05.303 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.303 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:05.561 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.561 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=159 00:17:05.561 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 159 -ge 100 ']' 00:17:05.561 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:17:05.561 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:17:05.561 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:17:05.561 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 652115 00:17:05.561 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 652115 ']' 00:17:05.561 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 652115 00:17:05.561 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:17:05.561 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:05.561 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 652115 00:17:05.561 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:05.561 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:05.561 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 652115' 00:17:05.561 killing process with pid 652115 00:17:05.561 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 652115 00:17:05.561 17:41:43 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 652115 00:17:06.077 2674.00 IOPS, 167.12 MiB/s [2024-10-17T15:41:44.468Z] 17:41:44 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:17:06.647 [2024-10-17 17:41:44.905019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.647 [2024-10-17 17:41:44.905064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:06.647 [2024-10-17 17:41:44.905077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.647 [2024-10-17 17:41:44.905086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:06.647 [2024-10-17 17:41:44.905095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.647 [2024-10-17 17:41:44.905103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:06.647 [2024-10-17 17:41:44.905118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.647 [2024-10-17 17:41:44.905127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:17:06.647 [2024-10-17 17:41:44.906790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:06.647 [2024-10-17 17:41:44.906849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:17:06.647 [2024-10-17 17:41:44.906909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.647 [2024-10-17 17:41:44.906943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0 00:17:06.647 [2024-10-17 17:41:44.906975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.647 [2024-10-17 17:41:44.907005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0 00:17:06.647 [2024-10-17 17:41:44.907037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.647 [2024-10-17 17:41:44.907068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0 00:17:06.647 [2024-10-17 17:41:44.907100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.647 [2024-10-17 17:41:44.907131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0 00:17:06.647 [2024-10-17 17:41:44.908867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:06.647 [2024-10-17 17:41:44.908908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:17:06.647 [2024-10-17 17:41:44.908960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.647 [2024-10-17 17:41:44.908993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0 00:17:06.647 [2024-10-17 17:41:44.909025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.647 [2024-10-17 17:41:44.909056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0 00:17:06.647 [2024-10-17 17:41:44.909087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.647 [2024-10-17 17:41:44.909118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0 00:17:06.647 [2024-10-17 17:41:44.909156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.647 [2024-10-17 17:41:44.909165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0 00:17:06.647 [2024-10-17 17:41:44.910597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:06.647 [2024-10-17 17:41:44.910640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:17:06.647 [2024-10-17 17:41:44.910689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.647 [2024-10-17 17:41:44.910722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0 00:17:06.647 [2024-10-17 17:41:44.910763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.647 [2024-10-17 17:41:44.910794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0 00:17:06.647 [2024-10-17 17:41:44.910826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.647 [2024-10-17 17:41:44.910857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0 00:17:06.647 [2024-10-17 17:41:44.910888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:17:06.647 [2024-10-17 17:41:44.910918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0 00:17:06.647 [2024-10-17 17:41:44.912758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:06.647 [2024-10-17 17:41:44.912799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:17:06.647 [2024-10-17 17:41:44.912853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:17:06.647 [2024-10-17 17:41:44.912885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0
00:17:06.647 [2024-10-17 17:41:44.912917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:17:06.647 [2024-10-17 17:41:44.912947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0
00:17:06.647 [2024-10-17 17:41:44.912981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:17:06.647 [2024-10-17 17:41:44.913012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0
00:17:06.647 [2024-10-17 17:41:44.913043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:17:06.647 [2024-10-17 17:41:44.913073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0
00:17:06.647 [2024-10-17 17:41:44.914515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:17:06.647 [2024-10-17 17:41:44.914556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:17:06.647 [2024-10-17 17:41:44.914602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:17:06.647 [2024-10-17 17:41:44.914635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0
00:17:06.647 [2024-10-17 17:41:44.914666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:17:06.647 [2024-10-17 17:41:44.914697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0
00:17:06.647 [2024-10-17 17:41:44.914729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:17:06.647 [2024-10-17 17:41:44.914759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0
00:17:06.647 [2024-10-17 17:41:44.914791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:17:06.647 [2024-10-17 17:41:44.914821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0
00:17:06.647 [2024-10-17 17:41:44.916716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:17:06.647 [2024-10-17 17:41:44.916756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:17:06.647 [2024-10-17 17:41:44.916806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:17:06.647 [2024-10-17 17:41:44.916839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0
00:17:06.647 [2024-10-17 17:41:44.916871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:17:06.647 [2024-10-17 17:41:44.916901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0
00:17:06.647 [2024-10-17 17:41:44.916933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:17:06.647 [2024-10-17 17:41:44.916963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0
00:17:06.647 [2024-10-17 17:41:44.916994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:17:06.647 [2024-10-17 17:41:44.917024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0
00:17:06.647 [2024-10-17 17:41:44.918404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:17:06.647 [2024-10-17 17:41:44.918463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:17:06.648 [2024-10-17 17:41:44.918517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:17:06.648 [2024-10-17 17:41:44.918550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0
00:17:06.648 [2024-10-17 17:41:44.918582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:17:06.648 [2024-10-17 17:41:44.918612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0
00:17:06.648 [2024-10-17 17:41:44.918644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:17:06.648 [2024-10-17 17:41:44.918674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0
00:17:06.648 [2024-10-17 17:41:44.918706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:17:06.648 [2024-10-17 17:41:44.918736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0
00:17:06.648 [2024-10-17 17:41:44.920566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:17:06.648 [2024-10-17 17:41:44.920607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:17:06.648 [2024-10-17 17:41:44.920654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:17:06.648 [2024-10-17 17:41:44.920686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0
00:17:06.648 [2024-10-17 17:41:44.920718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:17:06.648 [2024-10-17 17:41:44.920748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0
00:17:06.648 [2024-10-17 17:41:44.920787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:17:06.648 [2024-10-17 17:41:44.920818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0
00:17:06.648 [2024-10-17 17:41:44.920850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:17:06.648 [2024-10-17 17:41:44.920879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0
00:17:06.648 [2024-10-17 17:41:44.922651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:17:06.648 [2024-10-17 17:41:44.922692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:17:06.648 [2024-10-17 17:41:44.922741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:17:06.648 [2024-10-17 17:41:44.922774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0
00:17:06.648 [2024-10-17 17:41:44.922806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:17:06.648 [2024-10-17 17:41:44.922836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0
00:17:06.648 [2024-10-17 17:41:44.922869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:17:06.648 [2024-10-17 17:41:44.922899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0
00:17:06.648 [2024-10-17 17:41:44.922931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:17:06.648 [2024-10-17 17:41:44.922960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:62456 cdw0:0 sqhd:af00 p:1 m:0 dnr:0
00:17:06.648 [2024-10-17 17:41:44.924810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:17:06.648 [2024-10-17 17:41:44.924851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:17:06.648 [2024-10-17 17:41:44.926921] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200010602bc0 was disconnected and freed. reset controller.
00:17:06.648 [2024-10-17 17:41:44.926964] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:06.648 [2024-10-17 17:41:44.929040] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200010602900 was disconnected and freed. reset controller.
00:17:06.648 [2024-10-17 17:41:44.929062] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:06.648 [2024-10-17 17:41:44.930859] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200010602640 was disconnected and freed. reset controller.
00:17:06.648 [2024-10-17 17:41:44.930900] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:06.648 [2024-10-17 17:41:44.932649] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200010602380 was disconnected and freed. reset controller.
00:17:06.648 [2024-10-17 17:41:44.932690] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:06.648 [2024-10-17 17:41:44.934196] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000106020c0 was disconnected and freed. reset controller.
00:17:06.648 [2024-10-17 17:41:44.934237] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:06.648 [2024-10-17 17:41:44.935885] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200010601e00 was disconnected and freed. reset controller.
00:17:06.648 [2024-10-17 17:41:44.935925] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:06.648 [2024-10-17 17:41:44.936078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001803f280 len:0x10000 key:0x182700
00:17:06.648 [2024-10-17 17:41:44.936116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.648 [2024-10-17 17:41:44.936183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001802f200 len:0x10000 key:0x182700
00:17:06.648 [2024-10-17 17:41:44.936218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.648 [2024-10-17 17:41:44.936262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001801f180 len:0x10000 key:0x182700
00:17:06.648 [2024-10-17 17:41:44.936295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.648 [2024-10-17 17:41:44.936338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001800f100 len:0x10000 key:0x182700
00:17:06.648 [2024-10-17 17:41:44.936370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.648 [2024-10-17 17:41:44.936413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000183f0000 len:0x10000 key:0x183100
00:17:06.648 [2024-10-17 17:41:44.936507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.648 [2024-10-17 17:41:44.936526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000183dff80 len:0x10000 key:0x183100
00:17:06.648 [2024-10-17 17:41:44.936540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.648 [2024-10-17 17:41:44.936559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000183cff00 len:0x10000 key:0x183100
00:17:06.648 [2024-10-17 17:41:44.936573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.648 [2024-10-17 17:41:44.936591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000183bfe80 len:0x10000 key:0x183100
00:17:06.648 [2024-10-17 17:41:44.936605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.648 [2024-10-17 17:41:44.936624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000183afe00 len:0x10000 key:0x183100
00:17:06.648 [2024-10-17 17:41:44.936638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.648 [2024-10-17 17:41:44.936656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001839fd80 len:0x10000 key:0x183100
00:17:06.648 [2024-10-17 17:41:44.936670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.648 [2024-10-17 17:41:44.936689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001838fd00 len:0x10000 key:0x183100
00:17:06.648 [2024-10-17 17:41:44.936706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.648 [2024-10-17 17:41:44.936725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001837fc80 len:0x10000 key:0x183100
00:17:06.648 [2024-10-17 17:41:44.936739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.648 [2024-10-17 17:41:44.936757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001836fc00 len:0x10000 key:0x183100
00:17:06.648 [2024-10-17 17:41:44.936771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.648 [2024-10-17 17:41:44.936790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001835fb80 len:0x10000 key:0x183100
00:17:06.648 [2024-10-17 17:41:44.936804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.648 [2024-10-17 17:41:44.936823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001834fb00 len:0x10000 key:0x183100
00:17:06.648 [2024-10-17 17:41:44.936836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.648 [2024-10-17 17:41:44.936854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001833fa80 len:0x10000 key:0x183100
00:17:06.648 [2024-10-17 17:41:44.936868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.648 [2024-10-17 17:41:44.936887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001832fa00 len:0x10000 key:0x183100
00:17:06.648 [2024-10-17 17:41:44.936902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.648 [2024-10-17 17:41:44.936921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001831f980 len:0x10000 key:0x183100
00:17:06.648 [2024-10-17 17:41:44.936935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.648 [2024-10-17 17:41:44.936954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200017eefe00 len:0x10000 key:0x181e00
00:17:06.648 [2024-10-17 17:41:44.936968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.648 [2024-10-17 17:41:44.938347] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200010601b40 was disconnected and freed. reset controller.
00:17:06.649 [2024-10-17 17:41:44.938366] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:06.649 [2024-10-17 17:41:44.938453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000185f0000 len:0x10000 key:0x182c00
00:17:06.649 [2024-10-17 17:41:44.938469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.938490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000185dff80 len:0x10000 key:0x182c00
00:17:06.649 [2024-10-17 17:41:44.938504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.938528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000185cff00 len:0x10000 key:0x182c00
00:17:06.649 [2024-10-17 17:41:44.938542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.938561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000185bfe80 len:0x10000 key:0x182c00
00:17:06.649 [2024-10-17 17:41:44.938575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.938593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000185afe00 len:0x10000 key:0x182c00
00:17:06.649 [2024-10-17 17:41:44.938608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.938627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001859fd80 len:0x10000 key:0x182c00
00:17:06.649 [2024-10-17 17:41:44.938641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.938659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001858fd00 len:0x10000 key:0x182c00
00:17:06.649 [2024-10-17 17:41:44.938674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.938692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001857fc80 len:0x10000 key:0x182c00
00:17:06.649 [2024-10-17 17:41:44.938706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.938725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001856fc00 len:0x10000 key:0x182c00
00:17:06.649 [2024-10-17 17:41:44.938739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.938758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001855fb80 len:0x10000 key:0x182c00
00:17:06.649 [2024-10-17 17:41:44.938772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.938792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001854fb00 len:0x10000 key:0x182c00
00:17:06.649 [2024-10-17 17:41:44.938807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.938826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001853fa80 len:0x10000 key:0x182c00
00:17:06.649 [2024-10-17 17:41:44.938840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.938858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001852fa00 len:0x10000 key:0x182c00
00:17:06.649 [2024-10-17 17:41:44.938873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.938892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001851f980 len:0x10000 key:0x182c00
00:17:06.649 [2024-10-17 17:41:44.938908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.938927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001850f900 len:0x10000 key:0x182c00
00:17:06.649 [2024-10-17 17:41:44.938941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.938960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000184ff880 len:0x10000 key:0x182c00
00:17:06.649 [2024-10-17 17:41:44.938974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.938993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000184ef800 len:0x10000 key:0x182c00
00:17:06.649 [2024-10-17 17:41:44.939007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.939026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000184df780 len:0x10000 key:0x182c00
00:17:06.649 [2024-10-17 17:41:44.939040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.939059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000184cf700 len:0x10000 key:0x182c00
00:17:06.649 [2024-10-17 17:41:44.939073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.939092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000184bf680 len:0x10000 key:0x182c00
00:17:06.649 [2024-10-17 17:41:44.939105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.939124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000184af600 len:0x10000 key:0x182c00
00:17:06.649 [2024-10-17 17:41:44.939138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.939157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001849f580 len:0x10000 key:0x182c00
00:17:06.649 [2024-10-17 17:41:44.939171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.939190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001848f500 len:0x10000 key:0x182c00
00:17:06.649 [2024-10-17 17:41:44.939204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.939222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001847f480 len:0x10000 key:0x182c00
00:17:06.649 [2024-10-17 17:41:44.939236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.939255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001846f400 len:0x10000 key:0x182c00
00:17:06.649 [2024-10-17 17:41:44.939271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.939289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001845f380 len:0x10000 key:0x182c00
00:17:06.649 [2024-10-17 17:41:44.939303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.939322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001844f300 len:0x10000 key:0x182c00
00:17:06.649 [2024-10-17 17:41:44.939336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.939355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001843f280 len:0x10000 key:0x182c00
00:17:06.649 [2024-10-17 17:41:44.939369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.939387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001842f200 len:0x10000 key:0x182c00
00:17:06.649 [2024-10-17 17:41:44.939401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.939427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001841f180 len:0x10000 key:0x182c00
00:17:06.649 [2024-10-17 17:41:44.939441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.939460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001840f100 len:0x10000 key:0x182c00
00:17:06.649 [2024-10-17 17:41:44.939474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.939493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000187f0000 len:0x10000 key:0x182d00
00:17:06.649 [2024-10-17 17:41:44.939506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.939525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000187dff80 len:0x10000 key:0x182d00
00:17:06.649 [2024-10-17 17:41:44.939539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.939559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000187cff00 len:0x10000 key:0x182d00
00:17:06.649 [2024-10-17 17:41:44.939572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.939591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000187bfe80 len:0x10000 key:0x182d00
00:17:06.649 [2024-10-17 17:41:44.939605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.939624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000187afe00 len:0x10000 key:0x182d00
00:17:06.649 [2024-10-17 17:41:44.939640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.939659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001879fd80 len:0x10000 key:0x182d00
00:17:06.649 [2024-10-17 17:41:44.939673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.649 [2024-10-17 17:41:44.939691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001878fd00 len:0x10000 key:0x182d00
00:17:06.650 [2024-10-17 17:41:44.939705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.939724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001877fc80 len:0x10000 key:0x182d00
00:17:06.650 [2024-10-17 17:41:44.939738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.939757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001876fc00 len:0x10000 key:0x182d00
00:17:06.650 [2024-10-17 17:41:44.939771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.939790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001875fb80 len:0x10000 key:0x182d00
00:17:06.650 [2024-10-17 17:41:44.939804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.939822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001874fb00 len:0x10000 key:0x182d00
00:17:06.650 [2024-10-17 17:41:44.939837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.939855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001873fa80 len:0x10000 key:0x182d00
00:17:06.650 [2024-10-17 17:41:44.939869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.939889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001872fa00 len:0x10000 key:0x182d00
00:17:06.650 [2024-10-17 17:41:44.939903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.939922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001871f980 len:0x10000 key:0x182d00
00:17:06.650 [2024-10-17 17:41:44.939935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.939954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001870f900 len:0x10000 key:0x182d00
00:17:06.650 [2024-10-17 17:41:44.939968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.939987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000186ff880 len:0x10000 key:0x182d00
00:17:06.650 [2024-10-17 17:41:44.940001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.940022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000186ef800 len:0x10000 key:0x182d00
00:17:06.650 [2024-10-17 17:41:44.940036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.940055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000186df780 len:0x10000 key:0x182d00
00:17:06.650 [2024-10-17 17:41:44.940068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.940088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000186cf700 len:0x10000 key:0x182d00
00:17:06.650 [2024-10-17 17:41:44.940102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.940121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000186bf680 len:0x10000 key:0x182d00
00:17:06.650 [2024-10-17 17:41:44.940135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.940153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000186af600 len:0x10000 key:0x182d00
00:17:06.650 [2024-10-17 17:41:44.940167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.940186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001869f580 len:0x10000 key:0x182d00
00:17:06.650 [2024-10-17 17:41:44.940200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.940219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001868f500 len:0x10000 key:0x182d00
00:17:06.650 [2024-10-17 17:41:44.940232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.940251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001867f480 len:0x10000 key:0x182d00
00:17:06.650 [2024-10-17 17:41:44.940265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.940284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001866f400 len:0x10000 key:0x182d00
00:17:06.650 [2024-10-17 17:41:44.940298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.940317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001865f380 len:0x10000 key:0x182d00
00:17:06.650 [2024-10-17 17:41:44.940331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.940349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001864f300 len:0x10000 key:0x182d00
00:17:06.650 [2024-10-17 17:41:44.940363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.940384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001863f280 len:0x10000 key:0x182d00
00:17:06.650 [2024-10-17 17:41:44.940398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.940454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001862f200 len:0x10000 key:0x182d00
00:17:06.650 [2024-10-17 17:41:44.940469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.940487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001861f180 len:0x10000 key:0x182d00
00:17:06.650 [2024-10-17 17:41:44.940501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.940520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001860f100 len:0x10000 key:0x182d00
00:17:06.650 [2024-10-17 17:41:44.940534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.940553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000189f0000 len:0x10000 key:0x183200
00:17:06.650 [2024-10-17 17:41:44.940567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.940586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001820f700 len:0x10000 key:0x183100
00:17:06.650 [2024-10-17 17:41:44.940599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.942773] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200010601880 was disconnected and freed. reset controller.
00:17:06.650 [2024-10-17 17:41:44.942794] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:06.650 [2024-10-17 17:41:44.942814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000188cfd00 len:0x10000 key:0x183200
00:17:06.650 [2024-10-17 17:41:44.942829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.942852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000188bfc80 len:0x10000 key:0x183200
00:17:06.650 [2024-10-17 17:41:44.942867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.942886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000188afc00 len:0x10000 key:0x183200
00:17:06.650 [2024-10-17 17:41:44.942900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.942920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001889fb80 len:0x10000 key:0x183200
00:17:06.650 [2024-10-17 17:41:44.942933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.942952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001888fb00 len:0x10000 key:0x183200
00:17:06.650 [2024-10-17 17:41:44.942970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.942989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001887fa80 len:0x10000 key:0x183200
00:17:06.650 [2024-10-17 17:41:44.943003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.943022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001886fa00 len:0x10000 key:0x183200
00:17:06.650 [2024-10-17 17:41:44.943037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.943055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001885f980 len:0x10000 key:0x183200
00:17:06.650 [2024-10-17 17:41:44.943069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.943088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001884f900 len:0x10000 key:0x183200
00:17:06.650 [2024-10-17 17:41:44.943102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.943121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001883f880 len:0x10000 key:0x183200
00:17:06.650 [2024-10-17 17:41:44.943135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.650 [2024-10-17 17:41:44.943154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001882f800 len:0x10000 key:0x183200
00:17:06.651 [2024-10-17 17:41:44.943168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.651 [2024-10-17 17:41:44.943187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001881f780 len:0x10000 key:0x183200
00:17:06.651 [2024-10-17 17:41:44.943200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.651 [2024-10-17 17:41:44.943219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001880f700 len:0x10000 key:0x183200
00:17:06.651 [2024-10-17 17:41:44.943233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.651 [2024-10-17 17:41:44.943252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018bf0000 len:0x10000 key:0x182b00
00:17:06.651 [2024-10-17 17:41:44.943265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.651 [2024-10-17 17:41:44.943284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018bdff80 len:0x10000 key:0x182b00
00:17:06.651 [2024-10-17 17:41:44.943298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.651 [2024-10-17 17:41:44.943317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018bcff00 len:0x10000 key:0x182b00
00:17:06.651 [2024-10-17 17:41:44.943331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.651 [2024-10-17 17:41:44.943352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018bbfe80 len:0x10000 key:0x182b00
00:17:06.651 [2024-10-17 17:41:44.943366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.651 [2024-10-17 17:41:44.943385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018bafe00 len:0x10000 key:0x182b00
00:17:06.651 [2024-10-17 17:41:44.943399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.651 [2024-10-17 17:41:44.943424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b9fd80 len:0x10000 key:0x182b00
00:17:06.651 [2024-10-17 17:41:44.943438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.651 [2024-10-17 17:41:44.943457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b8fd00 len:0x10000 key:0x182b00
00:17:06.651 [2024-10-17 17:41:44.943471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.651 [2024-10-17 17:41:44.943490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b7fc80 len:0x10000 key:0x182b00
00:17:06.651 [2024-10-17 17:41:44.943504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.651 [2024-10-17 17:41:44.943523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b6fc00 len:0x10000 key:0x182b00
00:17:06.651 [2024-10-17 17:41:44.943537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.651 [2024-10-17 17:41:44.943556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b5fb80 len:0x10000 key:0x182b00
00:17:06.651 [2024-10-17 17:41:44.943570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.651 [2024-10-17 17:41:44.943589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b4fb00 len:0x10000 key:0x182b00
00:17:06.651 [2024-10-17 17:41:44.943603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.651 [2024-10-17 17:41:44.943622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b3fa80 len:0x10000 key:0x182b00
00:17:06.651 [2024-10-17 17:41:44.943636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.651 [2024-10-17 17:41:44.943655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b2fa00 len:0x10000 key:0x182b00
00:17:06.651 [2024-10-17 17:41:44.943669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.651 [2024-10-17 17:41:44.943688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b1f980 len:0x10000 key:0x182b00
00:17:06.651 [2024-10-17 17:41:44.943702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.651 [2024-10-17 17:41:44.943726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018b0f900 len:0x10000 key:0x182b00
00:17:06.651 [2024-10-17 17:41:44.943740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.651 [2024-10-17 17:41:44.943759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018aff880 len:0x10000 key:0x182b00
00:17:06.651 [2024-10-17 17:41:44.943772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.651 [2024-10-17 17:41:44.943791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018aef800 len:0x10000 key:0x182b00
00:17:06.651 [2024-10-17 17:41:44.943806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.651 [2024-10-17 17:41:44.943825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018adf780 len:0x10000 key:0x182b00
00:17:06.651 [2024-10-17 17:41:44.943839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.651 [2024-10-17 17:41:44.943858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018acf700 len:0x10000 key:0x182b00
00:17:06.651 [2024-10-17 17:41:44.943872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.651 [2024-10-17 17:41:44.943891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018abf680 len:0x10000 key:0x182b00
00:17:06.651 [2024-10-17 17:41:44.943905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.651 [2024-10-17 17:41:44.943923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018aaf600 len:0x10000 key:0x182b00
00:17:06.651 [2024-10-17 17:41:44.943937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.651 [2024-10-17 17:41:44.943956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a9f580 len:0x10000 key:0x182b00
00:17:06.651 [2024-10-17 17:41:44.943970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.651 [2024-10-17 17:41:44.943989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a8f500 len:0x10000 key:0x182b00
00:17:06.651 [2024-10-17 17:41:44.944003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.651 [2024-10-17 17:41:44.944022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a7f480 len:0x10000 key:0x182b00
00:17:06.651 [2024-10-17 17:41:44.944036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.651 [2024-10-17 17:41:44.944054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a6f400 len:0x10000 key:0x182b00
00:17:06.651 [2024-10-17 17:41:44.944068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.651 [2024-10-17 17:41:44.944087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a5f380 len:0x10000 key:0x182b00
00:17:06.651 [2024-10-17 17:41:44.944104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.651 [2024-10-17 17:41:44.944123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a4f300 len:0x10000 key:0x182b00
00:17:06.651 [2024-10-17 17:41:44.944137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.651 [2024-10-17 17:41:44.944155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a3f280 len:0x10000 key:0x182b00
00:17:06.651 [2024-10-17 17:41:44.944169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.651 [2024-10-17 17:41:44.944188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a2f200 len:0x10000 key:0x182b00
00:17:06.651 [2024-10-17 17:41:44.944202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.651 [2024-10-17 17:41:44.944221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a1f180 len:0x10000 key:0x182b00
00:17:06.651 [2024-10-17 17:41:44.944235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.652 [2024-10-17 17:41:44.944255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018a0f100 len:0x10000 key:0x182b00
00:17:06.652 [2024-10-17 17:41:44.944269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.652 [2024-10-17 17:41:44.944288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018df0000 len:0x10000 key:0x183000
00:17:06.652 [2024-10-17 17:41:44.944302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.652 [2024-10-17 17:41:44.944321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ddff80 len:0x10000 key:0x183000
00:17:06.652 [2024-10-17 17:41:44.944334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.652 [2024-10-17 17:41:44.944353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018dcff00 len:0x10000 key:0x183000
00:17:06.652 [2024-10-17 17:41:44.944367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.652 [2024-10-17 17:41:44.944386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018dbfe80 len:0x10000 key:0x183000
00:17:06.652 [2024-10-17 17:41:44.944399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.652 [2024-10-17 17:41:44.944453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018dafe00 len:0x10000 key:0x183000
00:17:06.652 [2024-10-17 17:41:44.944468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.652 [2024-10-17 17:41:44.944487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d9fd80 len:0x10000 key:0x183000
00:17:06.652 [2024-10-17 17:41:44.944507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.652 [2024-10-17 17:41:44.944526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d8fd00 len:0x10000 key:0x183000
00:17:06.652 [2024-10-17 17:41:44.944540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.652 [2024-10-17 17:41:44.944559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d7fc80 len:0x10000 key:0x183000
00:17:06.652 [2024-10-17 17:41:44.944573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.652 [2024-10-17 17:41:44.944591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d6fc00 len:0x10000 key:0x183000
00:17:06.652 [2024-10-17 17:41:44.944605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.652 [2024-10-17 17:41:44.944624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d5fb80 len:0x10000 key:0x183000
00:17:06.652 [2024-10-17 17:41:44.944638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.652 [2024-10-17 17:41:44.944657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d4fb00 len:0x10000 key:0x183000
00:17:06.652 [2024-10-17 17:41:44.944671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.652 [2024-10-17 17:41:44.944690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d3fa80 len:0x10000 key:0x183000
00:17:06.652 [2024-10-17 17:41:44.944703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.652 [2024-10-17 17:41:44.944722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d2fa00 len:0x10000 key:0x183000
00:17:06.652 [2024-10-17 17:41:44.944736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.652 [2024-10-17 17:41:44.944755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d1f980 len:0x10000 key:0x183000
00:17:06.652 [2024-10-17 17:41:44.944768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.652 [2024-10-17 17:41:44.944787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018d0f900 len:0x10000 key:0x183000
00:17:06.652 [2024-10-17 17:41:44.944801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.652 [2024-10-17 17:41:44.944823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018cff880 len:0x10000 key:0x183000
00:17:06.652 [2024-10-17 17:41:44.944836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.652 [2024-10-17 17:41:44.944855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018cef800 len:0x10000 key:0x183000
00:17:06.652 [2024-10-17 17:41:44.944871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.652 [2024-10-17 17:41:44.944890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018cdf780 len:0x10000 key:0x183000
00:17:06.652 [2024-10-17 17:41:44.944904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.652 [2024-10-17 17:41:44.944923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ccf700 len:0x10000 key:0x183000
00:17:06.652 [2024-10-17 17:41:44.944937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.652 [2024-10-17 17:41:44.944955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000188dfd80 len:0x10000 key:0x183200
00:17:06.652 [2024-10-17 17:41:44.944969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.652 [2024-10-17 17:41:44.947100] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000106015c0 was disconnected and freed. reset controller.
00:17:06.652 [2024-10-17 17:41:44.947119] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:06.652 [2024-10-17 17:41:44.947138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018edfd80 len:0x10000 key:0x181000
00:17:06.652 [2024-10-17 17:41:44.947152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.652 [2024-10-17 17:41:44.947181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ecfd00 len:0x10000 key:0x181000
00:17:06.652 [2024-10-17 17:41:44.947195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.652 [2024-10-17 17:41:44.947214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ebfc80 len:0x10000 key:0x181000
00:17:06.652 [2024-10-17 17:41:44.947227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.652 [2024-10-17 17:41:44.947246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eafc00 len:0x10000 key:0x181000
00:17:06.652 [2024-10-17 17:41:44.947259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0
00:17:06.652 [2024-10-17 17:41:44.947277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e9fb80 len:0x10000 key:0x181000
00:17:06.652 [2024-10-17 17:41:44.947291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED
- SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.652 [2024-10-17 17:41:44.947309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e8fb00 len:0x10000 key:0x181000 00:17:06.652 [2024-10-17 17:41:44.947323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.652 [2024-10-17 17:41:44.947341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e7fa80 len:0x10000 key:0x181000 00:17:06.652 [2024-10-17 17:41:44.947355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.652 [2024-10-17 17:41:44.947376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fa00 len:0x10000 key:0x181000 00:17:06.652 [2024-10-17 17:41:44.947390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.652 [2024-10-17 17:41:44.947408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e5f980 len:0x10000 key:0x181000 00:17:06.652 [2024-10-17 17:41:44.947430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.652 [2024-10-17 17:41:44.947449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e4f900 len:0x10000 key:0x181000 00:17:06.652 [2024-10-17 17:41:44.947462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.652 [2024-10-17 17:41:44.947481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e3f880 len:0x10000 key:0x181000 00:17:06.652 [2024-10-17 17:41:44.947495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.652 [2024-10-17 17:41:44.947513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e2f800 len:0x10000 key:0x181000 00:17:06.652 [2024-10-17 17:41:44.947526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.652 [2024-10-17 17:41:44.947544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e1f780 len:0x10000 key:0x181000 00:17:06.652 [2024-10-17 17:41:44.947557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.652 [2024-10-17 17:41:44.947576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e0f700 len:0x10000 key:0x181000 00:17:06.652 [2024-10-17 17:41:44.947589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.652 [2024-10-17 17:41:44.947607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018cbf680 len:0x10000 key:0x183000 00:17:06.652 [2024-10-17 17:41:44.947620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.652 [2024-10-17 17:41:44.947638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018caf600 len:0x10000 key:0x183000 00:17:06.652 [2024-10-17 17:41:44.947651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.652 [2024-10-17 17:41:44.947669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c9f580 len:0x10000 key:0x183000 00:17:06.653 [2024-10-17 17:41:44.947683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.947701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c8f500 len:0x10000 key:0x183000 00:17:06.653 [2024-10-17 17:41:44.947715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.947732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c7f480 len:0x10000 key:0x183000 00:17:06.653 [2024-10-17 17:41:44.947748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.947766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c6f400 len:0x10000 key:0x183000 00:17:06.653 [2024-10-17 17:41:44.947779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.947798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c5f380 len:0x10000 key:0x183000 00:17:06.653 [2024-10-17 17:41:44.947811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.947829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c4f300 len:0x10000 key:0x183000 00:17:06.653 [2024-10-17 17:41:44.947842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.947861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c3f280 len:0x10000 key:0x183000 00:17:06.653 [2024-10-17 17:41:44.947874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 
00:17:06.653 [2024-10-17 17:41:44.947892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c2f200 len:0x10000 key:0x183000 00:17:06.653 [2024-10-17 17:41:44.947905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.947923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c1f180 len:0x10000 key:0x183000 00:17:06.653 [2024-10-17 17:41:44.947936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.947955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018c0f100 len:0x10000 key:0x183000 00:17:06.653 [2024-10-17 17:41:44.947968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.947986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000191f0000 len:0x10000 key:0x182800 00:17:06.653 [2024-10-17 17:41:44.947999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.948017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000191dff80 len:0x10000 key:0x182800 00:17:06.653 [2024-10-17 17:41:44.948030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.948048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000191cff00 len:0x10000 key:0x182800 00:17:06.653 [2024-10-17 17:41:44.948062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.948080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000191bfe80 len:0x10000 key:0x182800 00:17:06.653 [2024-10-17 17:41:44.948095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.948113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000191afe00 len:0x10000 key:0x182800 00:17:06.653 [2024-10-17 17:41:44.948126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.948144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001919fd80 len:0x10000 key:0x182800 00:17:06.653 [2024-10-17 17:41:44.948157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 
17:41:44.948175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001918fd00 len:0x10000 key:0x182800 00:17:06.653 [2024-10-17 17:41:44.948189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.948206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001917fc80 len:0x10000 key:0x182800 00:17:06.653 [2024-10-17 17:41:44.948220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.948238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001916fc00 len:0x10000 key:0x182800 00:17:06.653 [2024-10-17 17:41:44.948251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.948269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001915fb80 len:0x10000 key:0x182800 00:17:06.653 [2024-10-17 17:41:44.948282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.948300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001914fb00 len:0x10000 key:0x182800 00:17:06.653 [2024-10-17 17:41:44.948313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.948331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001913fa80 len:0x10000 key:0x182800 00:17:06.653 [2024-10-17 17:41:44.948344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.948363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001912fa00 len:0x10000 key:0x182800 00:17:06.653 [2024-10-17 17:41:44.948376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.948396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001911f980 len:0x10000 key:0x182800 00:17:06.653 [2024-10-17 17:41:44.948409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.948434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001910f900 len:0x10000 key:0x182800 00:17:06.653 [2024-10-17 17:41:44.948447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.948468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190ff880 len:0x10000 key:0x182800 00:17:06.653 [2024-10-17 17:41:44.948481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.948499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190ef800 len:0x10000 key:0x182800 00:17:06.653 [2024-10-17 17:41:44.948513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.948530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190df780 len:0x10000 key:0x182800 00:17:06.653 [2024-10-17 17:41:44.948544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.948562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cf700 len:0x10000 key:0x182800 00:17:06.653 [2024-10-17 17:41:44.948575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.948593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bf680 len:0x10000 key:0x182800 00:17:06.653 [2024-10-17 17:41:44.948607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.948625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190af600 len:0x10000 key:0x182800 00:17:06.653 [2024-10-17 17:41:44.948638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.948656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909f580 len:0x10000 key:0x182800 00:17:06.653 [2024-10-17 17:41:44.948669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.948687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908f500 len:0x10000 key:0x182800 00:17:06.653 [2024-10-17 17:41:44.948701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.948719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907f480 len:0x10000 key:0x182800 00:17:06.653 [2024-10-17 17:41:44.948732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.948750] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001906f400 len:0x10000 key:0x182800 00:17:06.653 [2024-10-17 17:41:44.948764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.948782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905f380 len:0x10000 key:0x182800 00:17:06.653 [2024-10-17 17:41:44.948795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.948815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904f300 len:0x10000 key:0x182800 00:17:06.653 [2024-10-17 17:41:44.948828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.948846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001903f280 len:0x10000 key:0x182800 00:17:06.653 [2024-10-17 17:41:44.948859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.653 [2024-10-17 17:41:44.948877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902f200 len:0x10000 key:0x182800 00:17:06.654 [2024-10-17 17:41:44.948890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.654 [2024-10-17 17:41:44.948908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901f180 len:0x10000 key:0x182800 00:17:06.654 [2024-10-17 17:41:44.948922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.654 [2024-10-17 17:41:44.948940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001900f100 len:0x10000 key:0x182800 00:17:06.654 [2024-10-17 17:41:44.948953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.654 [2024-10-17 17:41:44.948971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000193f0000 len:0x10000 key:0x183600 00:17:06.654 [2024-10-17 17:41:44.948984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.654 [2024-10-17 17:41:44.949002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000193dff80 len:0x10000 key:0x183600 00:17:06.654 [2024-10-17 17:41:44.949015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.654 [2024-10-17 17:41:44.949033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 
nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000193cff00 len:0x10000 key:0x183600 00:17:06.654 [2024-10-17 17:41:44.949047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.654 [2024-10-17 17:41:44.949065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000193bfe80 len:0x10000 key:0x183600 00:17:06.654 [2024-10-17 17:41:44.949078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.654 [2024-10-17 17:41:44.949098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000193afe00 len:0x10000 key:0x183600 00:17:06.654 [2024-10-17 17:41:44.949112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.654 [2024-10-17 17:41:44.949130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001939fd80 len:0x10000 key:0x183600 00:17:06.654 [2024-10-17 17:41:44.949143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.654 [2024-10-17 17:41:44.949161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eefe00 len:0x10000 key:0x181000 00:17:06.654 [2024-10-17 17:41:44.949176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:fc7b0000 sqhd:7250 p:0 m:0 dnr:0 00:17:06.654 [2024-10-17 17:41:44.966428] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200010601300 was disconnected and freed. reset controller. 00:17:06.654 [2024-10-17 17:41:44.966482] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:06.654 [2024-10-17 17:41:44.966650] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:06.654 [2024-10-17 17:41:44.966699] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:06.654 [2024-10-17 17:41:44.966744] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:06.654 [2024-10-17 17:41:44.966783] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:06.654 [2024-10-17 17:41:44.966825] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:06.654 [2024-10-17 17:41:44.966870] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:06.654 [2024-10-17 17:41:44.966909] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:06.654 [2024-10-17 17:41:44.966952] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:06.654 [2024-10-17 17:41:44.966996] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
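The wall of paired WRITE / "ABORTED - SQ DELETION" notices above is bdevperf printing, command by command, every I/O still outstanding when its submission queue was torn down; only the cid, lba and SGL address vary between records. When triaging a run like this it is usually enough to count the pairs rather than read them. A minimal sketch, assuming the console output was captured to a file (the name bdevperf.log is hypothetical):

    # total aborted completions, then the distinct commands that were lost
    grep -c 'ABORTED - SQ DELETION' bdevperf.log
    grep -oE 'WRITE sqid:[0-9]+ cid:[0-9]+' bdevperf.log | sort -u | wc -l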
00:17:06.654 [2024-10-17 17:41:44.967035] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:17:06.654 [2024-10-17 17:41:44.972727 - 17:41:44.975046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: resetting controller, one notice each for [nqn.2016-06.io.spdk:cnode1] through [nqn.2016-06.io.spdk:cnode7]
00:17:06.654 task offset: 36352 on job bdev=Nvme1n1 fails
00:17:06.654
00:17:06.654 Latency(us)
00:17:06.654 [2024-10-17T15:41:45.045Z] Device Information : runtime(s)  IOPS     MiB/s  Fail/s  TO/s  Average    min       max
00:17:06.654 (all ten jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400; every job ended with error)
00:17:06.654 Nvme1n1  : 1.90   135.02   8.44  33.75   0.00  376319.82  34192.70  1043105.17
00:17:06.654 Nvme2n1  : 1.90   159.74   9.98  33.74   0.00  325456.89    3875.17  1043105.17
00:17:06.654 Nvme3n1  : 1.90   151.77   9.49  33.73   0.00  336563.14   15386.71  1043105.17
00:17:06.654 Nvme4n1  : 1.90   146.96   9.19  33.71   0.00  342755.19   24618.74  1043105.17
00:17:06.654 Nvme5n1  : 1.90   141.11   8.82  33.70   0.00  351374.41   28607.89  1035810.73
00:17:06.654 Nvme6n1  : 1.90   134.74   8.42  33.69   0.00  361568.88   48097.73  1035810.73
00:17:06.654 Nvme7n1  : 1.90   134.69   8.42  33.67   0.00  356213.09   63370.46  1094166.26
00:17:06.654 Nvme8n1  : 1.87   136.95   8.56  34.24   0.00  350110.41   73856.22  1086871.82
00:17:06.654 Nvme9n1  : 1.87   136.64   8.54  34.16   0.00  347880.76   58355.53  1079577.38
00:17:06.654 Nvme10n1 : 1.88   136.33   8.52  34.08   0.00  345472.31   43310.75  1064988.49
00:17:06.654 [2024-10-17T15:41:45.045Z] ===================================================================
00:17:06.654 Total    :       1413.95  88.37 338.47   0.00  348869.98    3875.17  1094166.26
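A quick consistency check on the table: every job ran with a fixed 65536-byte I/O size, so the MiB/s column should equal IOPS x 64 KiB, while Average/min/max are latencies in microseconds per the Latency(us) header. A one-line check for the Nvme1n1 row (any row works the same way):

    awk 'BEGIN { printf "%.2f MiB/s\n", 135.02 * 65536 / 1048576 }'   # prints 8.44 MiB/s, matching the row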
00:17:06.654 [2024-10-17 17:41:44.996496] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:17:06.654 [2024-10-17 17:41:44.996531 - 17:41:44.996575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: resetting controller, one notice each for [nqn.2016-06.io.spdk:cnode8], [cnode9] and [cnode10]
00:17:06.654-00:17:06.655 [2024-10-17 17:41:45.003910 - 17:41:45.009750] nvme_rdma.c: ten repetitions of the error triplet 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) / 1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 / 2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=..., for rqpairs 0x2000168ed000, 0x2000168e5280, 0x2000168ba2c0, 0x2000168b9ac0, 0x2000168d20c0, 0x2000168bf180, 0x200016889000, 0x20001689a040, 0x20001689a640 and 0x2000168bf4c0
00:17:07.219 17:41:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 652278
00:17:07.219 17:41:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0
00:17:07.219 17:41:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 652278
00:17:07.219 17:41:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait
00:17:07.219 17:41:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:07.219 17:41:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait
00:17:07.219 17:41:45 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 652278
00:17:07.785 [2024-10-17 17:41:46.008298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:17:07.785 [2024-10-17 17:41:46.008331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:17:07.785 [2024-10-17 17:41:46.009423, .009436] the same CQ transport error and "[nqn.2016-06.io.spdk:cnode2] in failed state." pair
00:17:07.785 [2024-10-17 17:41:46.009473 - .009523] nvme_ctrlr.c:4193/1822/1094: for [cnode1], then [cnode2]: *ERROR*: Ctrlr is in error state / *ERROR*: controller reinitialization failed / *NOTICE*: already in failed state
00:17:07.785 [2024-10-17 17:41:46.009543, .009554] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (twice)
00:17:07.785-00:17:07.786 [2024-10-17 17:41:46.011643 - .015508] the same CQ transport error and "in failed state." pairs for [cnode3] through [cnode6]
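The "NOT wait 652278" trace above (resolved further down, where es=255 is folded to es=127 and then es=1) is the harness asserting that the bdevperf process exits non-zero. A condensed sketch of the inverted-status idiom the xtrace walks through — reconstructed from the trace, not the verbatim helper in test/common/autotest_common.sh:

    # NOT <cmd> succeeds only when <cmd> fails
    NOT() {
        local es=0
        "$@" || es=$?
        if (( es > 128 )); then
            es=$(( es & 127 ))    # strip the signal bit: 255 -> 127, as in the trace
        fi
        case "$es" in
            0) ;;                 # the command unexpectedly succeeded
            *) es=1 ;;            # collapse every failure mode to 1
        esac
        (( !es == 0 ))            # invert: a failing command makes NOT return success
    }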
00:17:07.786 [2024-10-17 17:41:46.016728 - .019905] nvme_qpair.c: 804 / nvme_ctrlr.c:1106: the same CQ transport error -6 (No such device or address) and "in failed state." pairs for [nqn.2016-06.io.spdk:cnode7] through [cnode10]
00:17:07.786 [2024-10-17 17:41:46.019917 - .020069] nvme_ctrlr.c:4193/1822/1094: for [cnode3] through [cnode6]: *ERROR*: Ctrlr is in error state / *ERROR*: controller reinitialization failed / *NOTICE*: already in failed state
00:17:07.786 [2024-10-17 17:41:46.020195 - .020243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (4 times)
00:17:07.786 [2024-10-17 17:41:46.020256 - .020400] nvme_ctrlr.c:4193/1822/1094: the same error-state / reinitialization-failed / already-failed sequence for [cnode7] through [cnode10]
00:17:07.786 [2024-10-17 17:41:46.020476 - .020521] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (4 times)
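By this point every one of the ten controllers has walked the same path: CQ transport error on its qpair, "in failed state", a failed reinitialization attempt, and a failed reset. A quick way to confirm nothing else is mixed in is to tally the state transitions per NQN from the captured log (bdevperf.log again hypothetical):

    grep -oE '\[nqn[^]]*\] (in failed state|controller reinitialization failed|already in failed state)' bdevperf.log |
        sort | uniq -c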
00:17:08.045 (each xtrace record below carries the prefix "17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 --", switching to "...nvmf_shutdown --" and then "...nvmf_shutdown.nvmf_shutdown_tc4 --" after the END TEST banner)
00:17:08.045 common/autotest_common.sh@653 -- # es=255
00:17:08.045 common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:08.045 common/autotest_common.sh@662 -- # es=127
00:17:08.045 common/autotest_common.sh@663 -- # case "$es" in
00:17:08.045 common/autotest_common.sh@670 -- # es=1
00:17:08.045 common/autotest_common.sh@677 -- # (( !es == 0 ))
00:17:08.045 target/shutdown.sh@140 -- # stoptarget
00:17:08.045 target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:17:08.045 target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:17:08.045 target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:17:08.045 target/shutdown.sh@46 -- # nvmftestfini
00:17:08.045 nvmf/common.sh@514 -- # nvmfcleanup
00:17:08.045 nvmf/common.sh@121 -- # sync
00:17:08.045 nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:17:08.045 nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:17:08.045 nvmf/common.sh@124 -- # set +e
00:17:08.045 nvmf/common.sh@125 -- # for i in {1..20}
00:17:08.045 nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:17:08.045 rmmod nvme_rdma
00:17:08.045 rmmod nvme_fabrics
00:17:08.045 nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:08.045 nvmf/common.sh@128 -- # set -e
00:17:08.045 nvmf/common.sh@129 -- # return 0
00:17:08.045 nvmf/common.sh@515 -- # '[' -n 652115 ']'
00:17:08.045 nvmf/common.sh@516 -- # killprocess 652115
00:17:08.045 common/autotest_common.sh@950 -- # '[' -z 652115 ']'
00:17:08.045 common/autotest_common.sh@954 -- # kill -0 652115
00:17:08.045 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (652115) - No such process
00:17:08.045 common/autotest_common.sh@977 -- # echo 'Process with pid 652115 is not found'
00:17:08.045 Process with pid 652115 is not found
00:17:08.045 nvmf/common.sh@518 -- # '[' '' == iso ']'
00:17:08.045 nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]]
00:17:08.045
00:17:08.045 real 0m5.519s
00:17:08.045 user 0m16.217s
00:17:08.045 sys 0m1.352s
00:17:08.045 common/autotest_common.sh@1126 -- # xtrace_disable
00:17:08.045 common/autotest_common.sh@10 -- # set +x
00:17:08.045 ************************************
00:17:08.045 END TEST nvmf_shutdown_tc3
00:17:08.045 ************************************
00:17:08.045 target/shutdown.sh@166 -- # [[ mlx5 == \e\8\1\0 ]]
00:17:08.045 target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
00:17:08.045 common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:17:08.045 common/autotest_common.sh@1107 -- # xtrace_disable
00:17:08.045 common/autotest_common.sh@10 -- # set +x
00:17:08.046 ************************************
00:17:08.046 START TEST nvmf_shutdown_tc4
00:17:08.046 ************************************
00:17:08.046 target/shutdown.sh@145 -- # starttarget
00:17:08.046 target/shutdown.sh@16 -- # nvmftestinit
00:17:08.046 nvmf/common.sh@467 -- # '[' -z rdma ']'
00:17:08.046 nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:17:08.046 nvmf/common.sh@474 -- # prepare_net_devs
00:17:08.046 nvmf/common.sh@436 -- # local -g is_hw=no
00:17:08.046 nvmf/common.sh@438 -- # remove_spdk_ns
00:17:08.046 nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:08.046 common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:08.046 common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:08.046 nvmf/common.sh@440 -- # [[ phy != virt ]]
00:17:08.046 nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:17:08.046 nvmf/common.sh@309 -- # xtrace_disable
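The END TEST / START TEST banners and the real/user/sys triple above come from the harness's run_test wrapper, which times each test function between banner pairs. A condensed sketch of what it evidently does — the real helper also validates its argument count (the '[' 2 -le 1 ']' check above) and toggles xtrace around the banners:

    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                 # produces the real/user/sys lines seen above
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }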
xtrace_disable 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
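The e810, x722, and mlx arrays built here (the mlx list continues just below) pair each supported NIC family with its PCI vendor:device IDs; 0x15b3:0x1013 is the ConnectX-4 that both ports on this node report. common.sh resolves these IDs through its pre-built pci_bus_cache; a standalone lookup with plain lspci would find the same devices. The sketch below is illustrative only, and find_nics is a hypothetical name, not part of the test scripts.

# Hypothetical helper: map vendor:device IDs to PCI addresses with lspci.
find_nics() {
    local id
    for id in "15b3:1013" "15b3:1015" "15b3:1017" "15b3:1019"; do
        # "lspci -Dn" prints e.g. "0000:18:00.0 0200: 15b3:1013"; field 3 is vendor:device
        lspci -Dn | awk -v id="$id" '$3 == id { print $1 }'
    done
}
find_nics    # on this node this would print 0000:18:00.0 and 0000:18:00.1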
00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:17:08.046 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:17:08.046 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:08.046 
17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:17:08.046 Found net devices under 0000:18:00.0: mlx_0_0 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:17:08.046 Found net devices under 0000:18:00.1: mlx_0_1 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # rdma_device_init 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # uname 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:08.046 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@528 -- # allocate_nic_ips 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:17:08.306 17:41:46 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:08.306 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:08.306 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:17:08.306 altname enp24s0f0np0 00:17:08.306 altname ens785f0np0 00:17:08.306 inet 192.168.100.8/24 scope global mlx_0_0 00:17:08.306 valid_lft forever preferred_lft forever 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:08.306 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:08.306 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:17:08.306 altname enp24s0f1np1 00:17:08.306 altname ens785f1np1 00:17:08.306 inet 192.168.100.9/24 scope global mlx_0_1 00:17:08.306 valid_lft forever preferred_lft forever 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@482 -- # get_available_rdma_ips 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@109 -- # continue 2 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@91 -- 
# get_ip_address mlx_0_1 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:17:08.306 192.168.100.9' 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:17:08.306 192.168.100.9' 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # head -n 1 00:17:08.306 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:08.307 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:17:08.307 192.168.100.9' 00:17:08.307 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # tail -n +2 00:17:08.307 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # head -n 1 00:17:08.307 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:08.307 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:17:08.307 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:08.307 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:17:08.307 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:17:08.307 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:17:08.307 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:17:08.307 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:08.307 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:08.307 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:17:08.307 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=653014 00:17:08.307 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 653014 00:17:08.307 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 653014 ']' 00:17:08.307 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.307 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:17:08.307 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.307 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:08.307 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:17:08.307 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:08.307 [2024-10-17 17:41:46.676696] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:17:08.307 [2024-10-17 17:41:46.676761] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.565 [2024-10-17 17:41:46.750674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:08.565 [2024-10-17 17:41:46.794789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:08.565 [2024-10-17 17:41:46.794832] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:08.565 [2024-10-17 17:41:46.794842] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:08.565 [2024-10-17 17:41:46.794850] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:08.565 [2024-10-17 17:41:46.794857] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
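A few entries above, allocate_nic_ips derives each RDMA interface's IPv4 address with an ip/awk/cut pipeline and later splits the newline-joined RDMA_IP_LIST into first and second target addresses with head and tail. A minimal standalone sketch of that logic, assuming iproute2 and the mlx_0_0/mlx_0_1 interface names seen in the trace:

# Sketch of the get_ip_address pipeline traced above.
get_ip_address() {
    local interface=$1
    # "ip -o -4" prints one line per address: "2: mlx_0_0 inet 192.168.100.8/24 ..."
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

rdma_ip_list="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"                                  # newline-separated, like RDMA_IP_LIST
first_ip=$(echo "$rdma_ip_list" | head -n 1)                # 192.168.100.8 here
second_ip=$(echo "$rdma_ip_list" | tail -n +2 | head -n 1)  # 192.168.100.9 here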
00:17:08.565 [2024-10-17 17:41:46.796234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:08.565 [2024-10-17 17:41:46.796311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:08.565 [2024-10-17 17:41:46.796447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.565 [2024-10-17 17:41:46.796447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:08.565 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:08.565 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:17:08.565 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:08.565 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:08.565 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:17:08.565 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.565 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:08.565 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.565 17:41:46 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:17:08.823 [2024-10-17 17:41:46.957507] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18e35c0/0x18e7ab0) succeed. 00:17:08.823 [2024-10-17 17:41:46.968040] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18e4c50/0x1929150) succeed. 
00:17:08.823 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.823 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:17:08.823 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:17:08.823 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:08.823 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:17:08.823 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:08.823 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:08.823 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:17:08.823 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:08.823 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:17:08.823 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:08.823 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:17:08.823 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:08.823 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:17:08.823 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:08.823 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:17:08.823 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:08.823 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:17:08.823 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:08.823 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:17:08.823 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:08.823 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:17:08.823 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:08.823 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:17:08.823 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:17:08.823 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:17:08.823 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
target/shutdown.sh@36 -- # rpc_cmd 00:17:08.823 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.823 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:17:08.823 Malloc1 00:17:08.823 [2024-10-17 17:41:47.202411] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:09.081 Malloc2 00:17:09.081 Malloc3 00:17:09.081 Malloc4 00:17:09.081 Malloc5 00:17:09.081 Malloc6 00:17:09.081 Malloc7 00:17:09.339 Malloc8 00:17:09.339 Malloc9 00:17:09.339 Malloc10 00:17:09.339 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.339 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:17:09.339 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:09.339 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:17:09.339 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=653165 00:17:09.339 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:17:09.339 17:41:47 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' -P 4 00:17:09.597 [2024-10-17 17:41:47.741495] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
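Once spdk_nvme_perf has been given a five-second head start, the tc4 body shoots the target down mid-I/O with killprocess, traced just below: it checks that the PID is set and alive with kill -0, reads the process name with ps, refuses to signal a bare sudo wrapper, then kills and waits. A condensed sketch of that pattern, simplified from the traced autotest_common.sh logic:

# Simplified killprocess sketch (the real helper treats the sudo case differently).
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                      # the '[ -z ... ]' guard in the trace
    kill -0 "$pid" 2>/dev/null || return 0         # already gone: the tc3 teardown above hit this path
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1         # never signal the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                    # valid because the target was launched from this shell
}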
00:17:14.858 17:41:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:14.858 17:41:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 653014 00:17:14.858 17:41:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 653014 ']' 00:17:14.858 17:41:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 653014 00:17:14.858 17:41:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:17:14.858 17:41:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:14.858 17:41:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 653014 00:17:14.858 17:41:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:14.858 17:41:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:14.858 17:41:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 653014' 00:17:14.858 killing process with pid 653014 00:17:14.858 17:41:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 653014 00:17:14.858 17:41:52 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 653014 00:17:14.858 starting I/O failed: -6 00:17:14.858 NVMe io qpair process completion error 00:17:14.858 NVMe io qpair process completion error 00:17:14.858 NVMe io qpair process completion error 00:17:14.858 NVMe io qpair process completion error 00:17:14.858 NVMe io qpair process completion error 00:17:14.858 starting I/O failed: -6 00:17:14.858 starting I/O failed: -6 00:17:14.858 starting I/O failed: -6 00:17:14.858 starting I/O failed: -6 00:17:15.116 17:41:53 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:17:15.682 [2024-10-17 17:41:53.798997] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Submitting Keep Alive failed 00:17:15.682 [2024-10-17 17:41:53.800197] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Submitting Keep Alive failed 00:17:15.682 NVMe io qpair process completion error 00:17:15.682 Write completed with error (sct=0, sc=8) 00:17:15.682 [2024-10-17 17:41:53.800251] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:17:15.682 starting I/O failed: -6 00:17:15.682 Write completed with error (sct=0, sc=8) 00:17:15.682 [2024-10-17 17:41:53.800289] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Submitting Keep Alive failed 00:17:15.682 starting I/O failed: -6 00:17:15.682 Write completed with error (sct=0, sc=8) 00:17:15.682 starting I/O failed: -6 00:17:15.682 Write completed with error (sct=0, sc=8) 00:17:15.682 starting I/O failed: -6 00:17:15.682 Write completed with error (sct=0, sc=8) 00:17:15.682 starting I/O failed: -6 00:17:15.682 Write completed with error (sct=0, sc=8) 00:17:15.682 starting 
I/O failed: -6 00:17:15.682 Write completed with error (sct=0, sc=8) 00:17:15.682 starting I/O failed: -6
[dozens of identical 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' pairs at 00:17:15.682, followed by a long run of plain 'Write completed with error (sct=0, sc=8)' completions, condensed]
00:17:15.683 NVMe io qpair process completion error
[the remaining qpairs drain the same way: further alternating 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' runs interleaved with several hundred plain 'Write completed with error (sct=0, sc=8)' completions, timestamps 00:17:15.683 through 00:17:15.685, condensed]
00:17:15.685 Write
completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 NVMe io qpair process completion error 00:17:15.685 NVMe io qpair process completion error 00:17:15.685 NVMe io qpair process completion error 00:17:15.685 NVMe io qpair process completion error 00:17:15.685 NVMe io qpair process completion error 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.685 Write completed with error (sct=0, sc=8) 00:17:15.943 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 653165 00:17:15.943 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:17:15.943 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 653165 00:17:15.943 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:17:15.943 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:17:15.943 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:17:15.943 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:15.943 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 653165 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 Write completed with error (sct=0, sc=8) 00:17:16.510 [2024-10-17 17:41:54.806407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:16.510 [2024-10-17 17:41:54.806492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:17:16.510 [2024-10-17 17:41:54.808052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:16.510 [2024-10-17 17:41:54.808096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:17:16.510 [2024-10-17 17:41:54.809888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:17:16.510 [2024-10-17 17:41:54.809930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
[... runs of "Write completed with error (sct=0, sc=8)" interleaved between the controller-failure records below; repeats condensed ...]
00:17:16.510 [2024-10-17 17:41:54.811790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:17:16.510 [2024-10-17 17:41:54.811829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:17:16.510 [2024-10-17 17:41:54.813820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:17:16.510 [2024-10-17 17:41:54.813860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:17:16.511 [2024-10-17 17:41:54.815786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:17:16.511 [2024-10-17 17:41:54.815829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:17:16.511 [2024-10-17 17:41:54.817201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:17:16.511 [2024-10-17 17:41:54.817245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:17:16.511 [2024-10-17 17:41:54.819110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:17:16.511 [2024-10-17 17:41:54.819151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:17:16.511 [2024-10-17 17:41:54.820889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:17:16.511 [2024-10-17 17:41:54.820938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:17:16.511 [2024-10-17 17:41:54.822584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:17:16.511 [2024-10-17 17:41:54.822623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
[... "Write completed with error (sct=0, sc=8)" repeated several hundred times between 00:17:16.511 and 00:17:16.514; repeats condensed ...]
00:17:16.514 Initializing NVMe Controllers
00:17:16.514 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode6
00:17:16.514 Controller IO queue size 128, less than required.
00:17:16.514 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:16.514 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode3
00:17:16.514 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode7
00:17:16.514 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode2
00:17:16.514 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:17:16.514 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode10
00:17:16.514 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode4
00:17:16.514 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode5
00:17:16.514 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode8
00:17:16.514 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode9
[... the same "Controller IO queue size 128, less than required." / "Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver." advisory followed each of the ten attach records above; repeats condensed ...]
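The queue-size advisory logged for every controller above means each target exposes I/O queues of only 128 entries, so any benchmark queue depth above 128 simply waits inside the host NVMe driver instead of being in flight on the fabric. A sketch of acting on that advice with spdk_nvme_perf, reusing the RDMA target from this run; the -q/-o/-w/-t values are illustrative assumptions, not the flags this job actually passed:

    # Illustrative rerun: queue depth (-q) at or below the reported queue size,
    # a smaller IO size (-o); the transport string mirrors this log's target.
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w write -t 10 \
        -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'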
00:17:16.514 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:17:16.514 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:17:16.514 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:17:16.514 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:17:16.514 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:17:16.514 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:17:16.514 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:17:16.514 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:17:16.514 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:17:16.514 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:17:16.514 Initialization complete. Launching workers.
00:17:16.514 ========================================================
00:17:16.514                                                                           Latency(us)
00:17:16.514 Device Information                                                   :      IOPS   MiB/s    Average        min        max
00:17:16.514 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:   1495.93   64.28   99903.48     520.78 2160693.34
00:17:16.514 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:   1473.86   63.33   85756.11   40493.71 1135761.44
00:17:16.514 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:   1503.85   64.62   98835.87     126.76 2090338.65
00:17:16.514 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:   1469.99   63.16   86387.07   34042.08 1155839.17
00:17:16.514 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   1461.06   62.78   86986.29   38149.63 1165538.06
00:17:16.514 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:  1459.54   62.71   87150.84   42063.23 1173835.89
00:17:16.514 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:   1472.85   63.29  101088.65   24906.63 2147729.96
00:17:16.514 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:   1475.04   63.38  100999.46    1157.59 2160222.63
00:17:16.514 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:   1500.65   64.48   99331.38     149.67 2096611.32
00:17:16.514 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:   1525.75   65.56   97755.91      95.73 2037323.81
00:17:16.514 ========================================================
00:17:16.514 Total                                                                :  14838.51  637.59   94472.71      95.73 2160693.34
00:17:16.514
00:17:16.514 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:17:16.773 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:17:16.773 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:16.773 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:17:16.773 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:17:16.773 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
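For reading the aborted completions above: sct=0 selects the generic command status set, and in the NVMe base specification status code 0x08 of that set is "Command Aborted due to SQ Deletion", which is what in-flight writes return when this shutdown test deletes the submission queues under them, so spdk_nvme_perf reporting "errors occurred" is the expected outcome here. A small bash helper sketch for annotating such logs; the function name is made up for illustration:

decode_nvme_status() {
    # sct = status code type, sc = status code, both plain integers from the log
    local sct=$1 sc=$2
    if (( sct == 0 && sc == 8 )); then
        echo 'Generic Command Status: Command Aborted due to SQ Deletion'
    else
        echo "sct=$sct sc=$sc: look up in the NVMe base spec status tables"
    fi
}
decode_nvme_status 0 8   # the status seen throughout this run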
00:17:16.773 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:17:16.773 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:17:16.773 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:17:16.773 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:17:16.773 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup
00:17:16.773 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:17:16.773 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:17:16.773 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:17:16.773 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:17:16.773 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:16.773 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:17:16.773 rmmod nvme_rdma
00:17:16.773 rmmod nvme_fabrics
00:17:16.773 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:16.773 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:17:16.773 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:17:16.773 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 653014 ']'
00:17:16.773 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 653014
00:17:16.773 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 653014 ']'
00:17:16.773 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 653014
00:17:16.773 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (653014) - No such process
00:17:16.773 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 653014 is not found'
00:17:16.773 Process with pid 653014 is not found
00:17:16.773 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:17:16.773 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]]
00:17:16.773
00:17:16.773 real 0m8.603s
00:17:16.773 user 0m32.190s
00:17:16.773 sys 0m1.392s
00:17:16.773 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:17:16.773 17:41:54 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:17:16.773 ************************************
00:17:16.773 END TEST nvmf_shutdown_tc4
00:17:16.773 ************************************
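The nvmftestfini trace above fixes the teardown order: remove the bdevperf.conf and rpcs.txt scratch files, sync, retry unloading the kernel initiator modules with errexit suspended, then reap the target process while tolerating the case where it already exited (the "kill: (653014) - No such process" line). A condensed sketch of that pattern; $nvmfpid and the sleep between retries are assumptions, not a verbatim copy of nvmf/common.sh:

sync
set +e                                       # module removal may fail while queues drain
for i in {1..20}; do
    modprobe -v -r nvme-rdma && break
    sleep 1                                  # assumed back-off; the real loop may differ
done
modprobe -v -r nvme-fabrics
set -e
if kill -0 "$nvmfpid" 2> /dev/null; then     # $nvmfpid: target PID, e.g. 653014 above
    kill -9 "$nvmfpid"
fi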
00:17:16.773 17:41:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:17:16.773
00:17:16.773 real 0m32.131s
00:17:16.773 user 1m37.043s
00:17:16.773 sys 0m10.144s
00:17:16.773 17:41:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable
00:17:16.773 17:41:55 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:17:16.773 ************************************
00:17:16.773 END TEST nvmf_shutdown
00:17:16.773 ************************************
00:17:16.773 17:41:55 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:17:16.773
00:17:16.773 real 8m25.265s
00:17:16.773 user 21m41.383s
00:17:16.773 sys 2m9.737s
00:17:16.773 17:41:55 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable
00:17:16.773 17:41:55 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:17:16.773 ************************************
00:17:16.773 END TEST nvmf_target_extra
00:17:16.773 ************************************
00:17:16.773 17:41:55 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma
00:17:16.773 17:41:55 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:17:16.773 17:41:55 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable
00:17:16.773 17:41:55 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:17:16.773 ************************************
00:17:16.773 START TEST nvmf_host
00:17:16.773 ************************************
00:17:16.773 17:41:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma
00:17:17.031 * Looking for test storage...
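The START/END banner pairs and the real/user/sys triples around every suite come from the harness's run_test wrapper: it checks that it was handed a command plus arguments (the "'[' 3 -le 1 ']'" trace), prints a banner, times the body, and prints the closing banner. Roughly, as a sketch; the real run_test in autotest_common.sh also manages xtrace state and exit codes:

run_test_sketch() {
    local name=$1; shift
    (( $# >= 1 )) || return 1        # mirrors the '[' 3 -le 1 ']' argument check
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"                        # produces the real/user/sys lines in the log
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
}
# e.g. run_test_sketch nvmf_host test/nvmf/nvmf_host.sh --transport=rdma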
00:17:17.031 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- scripts/common.sh@345 -- # : 1 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- scripts/common.sh@368 -- # return 0 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:17.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.031 --rc genhtml_branch_coverage=1 00:17:17.031 --rc genhtml_function_coverage=1 00:17:17.031 --rc genhtml_legend=1 00:17:17.031 --rc geninfo_all_blocks=1 00:17:17.031 --rc geninfo_unexecuted_blocks=1 00:17:17.031 00:17:17.031 ' 00:17:17.031 17:41:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 
00:17:17.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.031 --rc genhtml_branch_coverage=1 00:17:17.031 --rc genhtml_function_coverage=1 00:17:17.032 --rc genhtml_legend=1 00:17:17.032 --rc geninfo_all_blocks=1 00:17:17.032 --rc geninfo_unexecuted_blocks=1 00:17:17.032 00:17:17.032 ' 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:17.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.032 --rc genhtml_branch_coverage=1 00:17:17.032 --rc genhtml_function_coverage=1 00:17:17.032 --rc genhtml_legend=1 00:17:17.032 --rc geninfo_all_blocks=1 00:17:17.032 --rc geninfo_unexecuted_blocks=1 00:17:17.032 00:17:17.032 ' 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:17.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.032 --rc genhtml_branch_coverage=1 00:17:17.032 --rc genhtml_function_coverage=1 00:17:17.032 --rc genhtml_legend=1 00:17:17.032 --rc geninfo_all_blocks=1 00:17:17.032 --rc geninfo_unexecuted_blocks=1 00:17:17.032 00:17:17.032 ' 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- paths/export.sh@2 -- 
# PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:17.032 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.032 ************************************ 00:17:17.032 START TEST nvmf_multicontroller 00:17:17.032 ************************************ 00:17:17.032 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:17:17.291 * Looking for test storage... 00:17:17.291 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:17.291 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:17.291 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:17:17.291 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:17.291 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:17.291 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:17.291 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:17.291 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:17.291 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:17:17.291 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:17:17.291 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:17:17.291 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:17:17.291 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:17:17.291 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:17:17.291 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:17:17.291 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:17.291 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:17:17.291 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:17:17.291 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:17.291 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:17.291 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:17:17.291 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:17:17.291 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:17.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.292 --rc genhtml_branch_coverage=1 00:17:17.292 --rc genhtml_function_coverage=1 00:17:17.292 --rc genhtml_legend=1 00:17:17.292 --rc geninfo_all_blocks=1 00:17:17.292 --rc geninfo_unexecuted_blocks=1 00:17:17.292 00:17:17.292 ' 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:17.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.292 --rc genhtml_branch_coverage=1 00:17:17.292 --rc genhtml_function_coverage=1 00:17:17.292 --rc genhtml_legend=1 00:17:17.292 --rc geninfo_all_blocks=1 00:17:17.292 --rc geninfo_unexecuted_blocks=1 00:17:17.292 00:17:17.292 ' 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:17.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.292 --rc genhtml_branch_coverage=1 00:17:17.292 --rc genhtml_function_coverage=1 00:17:17.292 --rc genhtml_legend=1 00:17:17.292 --rc geninfo_all_blocks=1 00:17:17.292 --rc geninfo_unexecuted_blocks=1 00:17:17.292 00:17:17.292 ' 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:17.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.292 --rc genhtml_branch_coverage=1 00:17:17.292 --rc genhtml_function_coverage=1 00:17:17.292 --rc genhtml_legend=1 00:17:17.292 --rc geninfo_all_blocks=1 00:17:17.292 --rc geninfo_unexecuted_blocks=1 00:17:17.292 00:17:17.292 ' 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 
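The cmp_versions trace that runs before each suite (here concluding that lcov 1.15 is older than 2, so the pre-2.0 coverage flags are exported) splits both version strings on '.', '-' and ':' and compares them component by component. A standalone sketch of that less-than test; cmp_versions in scripts/common.sh is the authoritative version and also handles non-numeric components via its decimal helper:

lt_sketch() {
    # return 0 if version $1 sorts before version $2, comparing components numerically
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1                         # equal versions are not less-than
}
lt_sketch 1.15 2 && echo 'old lcov: use the pre-2.0 coverage flags'   # matches the trace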
00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:17.292 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:17.292 17:41:55 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:17:17.292 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:17:17.292 00:17:17.292 real 0m0.210s 00:17:17.292 user 0m0.131s 00:17:17.292 sys 0m0.093s 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:17.292 ************************************ 00:17:17.292 END TEST nvmf_multicontroller 00:17:17.292 ************************************ 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:17.292 17:41:55 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:17.552 ************************************ 00:17:17.552 START TEST nvmf_aer 00:17:17.552 ************************************ 00:17:17.552 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:17:17.552 * Looking for test storage... 
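Worth noting from the multicontroller run above: on RDMA the suite does not fail, it self-skips with a transport check, one explanatory echo, and exit 0, so run_test still records a pass. The guard looks roughly like this sketch; the variable name is an assumption, since the trace only shows the already-expanded "'[' rdma == rdma ']'":

if [ "$TEST_TRANSPORT" == rdma ]; then   # assumed variable name
    echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
    exit 0                               # exit 0, not 1: skip without failing the build
fi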
00:17:17.552 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:17.552 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:17.552 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:17:17.552 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:17.552 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:17.552 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:17.552 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:17.552 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:17.552 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:17:17.552 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:17:17.552 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:17:17.552 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:17:17.552 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:17:17.552 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:17:17.552 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:17:17.552 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:17.552 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:17:17.552 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:17:17.552 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:17.552 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:17.552 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:17:17.552 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:17:17.552 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:17.552 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:17:17.552 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:17:17.552 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:17.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.553 --rc genhtml_branch_coverage=1 00:17:17.553 --rc genhtml_function_coverage=1 00:17:17.553 --rc genhtml_legend=1 00:17:17.553 --rc geninfo_all_blocks=1 00:17:17.553 --rc geninfo_unexecuted_blocks=1 00:17:17.553 00:17:17.553 ' 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:17.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.553 --rc genhtml_branch_coverage=1 00:17:17.553 --rc genhtml_function_coverage=1 00:17:17.553 --rc genhtml_legend=1 00:17:17.553 --rc geninfo_all_blocks=1 00:17:17.553 --rc geninfo_unexecuted_blocks=1 00:17:17.553 00:17:17.553 ' 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:17.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.553 --rc genhtml_branch_coverage=1 00:17:17.553 --rc genhtml_function_coverage=1 00:17:17.553 --rc genhtml_legend=1 00:17:17.553 --rc geninfo_all_blocks=1 00:17:17.553 --rc geninfo_unexecuted_blocks=1 00:17:17.553 00:17:17.553 ' 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:17.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:17.553 --rc genhtml_branch_coverage=1 00:17:17.553 --rc genhtml_function_coverage=1 00:17:17.553 --rc genhtml_legend=1 00:17:17.553 --rc geninfo_all_blocks=1 00:17:17.553 --rc geninfo_unexecuted_blocks=1 00:17:17.553 00:17:17.553 ' 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@10 
-- # NVMF_SECOND_PORT=4421 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:17.553 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:17:17.553 17:41:55 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:24.115 17:42:02 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:17:24.115 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:17:24.115 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:17:24.116 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:17:24.116 Found net devices under 0000:18:00.0: mlx_0_0 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:24.116 
17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:17:24.116 Found net devices under 0000:18:00.1: mlx_0_1 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # rdma_device_init 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # uname 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@528 -- # allocate_nic_ips 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:24.116 17:42:02 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:24.116 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:24.116 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:17:24.116 altname enp24s0f0np0 00:17:24.116 altname ens785f0np0 00:17:24.116 inet 192.168.100.8/24 scope global mlx_0_0 00:17:24.116 valid_lft forever preferred_lft forever 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:24.116 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:24.375 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:24.375 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:17:24.375 altname enp24s0f1np1 00:17:24.375 altname ens785f1np1 00:17:24.375 inet 192.168.100.9/24 scope global mlx_0_1 00:17:24.375 valid_lft forever preferred_lft forever 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@109 -- # continue 2 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:17:24.375 192.168.100.9' 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:17:24.375 192.168.100.9' 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # head -n 1 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:17:24.375 192.168.100.9' 
00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # tail -n +2 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # head -n 1 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=657435 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 657435 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 657435 ']' 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:24.375 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:24.375 [2024-10-17 17:42:02.688331] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:17:24.375 [2024-10-17 17:42:02.688403] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.375 [2024-10-17 17:42:02.762256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:24.634 [2024-10-17 17:42:02.809337] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:24.634 [2024-10-17 17:42:02.809380] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:24.634 [2024-10-17 17:42:02.809389] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:24.634 [2024-10-17 17:42:02.809414] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:17:24.634 [2024-10-17 17:42:02.809426] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:24.634 [2024-10-17 17:42:02.810759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.634 [2024-10-17 17:42:02.810780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:24.634 [2024-10-17 17:42:02.810857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:24.634 [2024-10-17 17:42:02.810859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.634 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:24.634 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:17:24.634 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:24.634 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:24.634 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:24.634 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:24.634 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:24.634 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.635 17:42:02 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:24.635 [2024-10-17 17:42:02.992196] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a322c0/0x1a367b0) succeed. 00:17:24.635 [2024-10-17 17:42:03.002681] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a33950/0x1a77e50) succeed. 
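Note: the nvmf_create_transport RPC traced above is what makes the running target open the two mlx5 IB devices (the create_ib_device notices that follow it). A minimal sketch of issuing the same call by hand, assuming the target started above is still listening on the default RPC socket /var/tmp/spdk.sock:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    # Same transport options the test traces: RDMA trtype,
    # -u 8192 (io unit size) and a 1024-entry shared buffer pool.
    "$SPDK/scripts/rpc.py" nvmf_create_transport -t rdma \
        --num-shared-buffers 1024 -u 8192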
00:17:24.893 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.893 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:17:24.893 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.893 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:24.893 Malloc0 00:17:24.893 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.893 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:17:24.893 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.893 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:24.893 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.893 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:24.893 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.893 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:24.894 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.894 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:24.894 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.894 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:24.894 [2024-10-17 17:42:03.185218] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:24.894 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.894 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:17:24.894 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.894 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:24.894 [ 00:17:24.894 { 00:17:24.894 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:24.894 "subtype": "Discovery", 00:17:24.894 "listen_addresses": [], 00:17:24.894 "allow_any_host": true, 00:17:24.894 "hosts": [] 00:17:24.894 }, 00:17:24.894 { 00:17:24.894 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:24.894 "subtype": "NVMe", 00:17:24.894 "listen_addresses": [ 00:17:24.894 { 00:17:24.894 "trtype": "RDMA", 00:17:24.894 "adrfam": "IPv4", 00:17:24.894 "traddr": "192.168.100.8", 00:17:24.894 "trsvcid": "4420" 00:17:24.894 } 00:17:24.894 ], 00:17:24.894 "allow_any_host": true, 00:17:24.894 "hosts": [], 00:17:24.894 "serial_number": "SPDK00000000000001", 00:17:24.894 "model_number": "SPDK bdev Controller", 00:17:24.894 "max_namespaces": 2, 00:17:24.894 "min_cntlid": 1, 00:17:24.894 "max_cntlid": 65519, 00:17:24.894 "namespaces": [ 00:17:24.894 { 00:17:24.894 "nsid": 1, 00:17:24.894 "bdev_name": "Malloc0", 00:17:24.894 "name": "Malloc0", 00:17:24.894 "nguid": "71C0F0B5097E45848D2A8E43AFBF2EB8", 00:17:24.894 "uuid": "71c0f0b5-097e-4584-8d2a-8e43afbf2eb8" 00:17:24.894 } 00:17:24.894 ] 00:17:24.894 } 00:17:24.894 ] 00:17:24.894 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.894 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:24.894 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:17:24.894 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=657470 00:17:24.894 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:17:24.894 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:17:24.894 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:17:24.894 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:24.894 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:17:24.894 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:17:24.894 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:17:25.153 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:25.153 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:17:25.153 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:17:25.153 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:17:25.153 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:25.153 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:25.153 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:17:25.153 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:17:25.153 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.153 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:25.153 Malloc1 00:17:25.153 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.153 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:17:25.153 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.153 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:25.153 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.153 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:17:25.153 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.153 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:25.153 [ 00:17:25.153 { 00:17:25.153 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:25.153 "subtype": "Discovery", 00:17:25.153 "listen_addresses": [], 00:17:25.153 "allow_any_host": true, 00:17:25.153 "hosts": [] 00:17:25.153 }, 00:17:25.153 { 00:17:25.153 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:25.153 "subtype": "NVMe", 00:17:25.153 "listen_addresses": [ 00:17:25.153 { 00:17:25.153 "trtype": "RDMA", 00:17:25.153 "adrfam": "IPv4", 00:17:25.153 "traddr": "192.168.100.8", 00:17:25.153 "trsvcid": "4420" 00:17:25.153 } 00:17:25.153 ], 00:17:25.153 "allow_any_host": true, 00:17:25.153 "hosts": [], 00:17:25.153 "serial_number": "SPDK00000000000001", 00:17:25.153 "model_number": "SPDK bdev Controller", 00:17:25.153 "max_namespaces": 2, 00:17:25.153 "min_cntlid": 1, 00:17:25.153 "max_cntlid": 65519, 00:17:25.153 "namespaces": [ 00:17:25.153 { 00:17:25.153 "nsid": 1, 00:17:25.153 "bdev_name": "Malloc0", 00:17:25.153 "name": "Malloc0", 00:17:25.153 "nguid": "71C0F0B5097E45848D2A8E43AFBF2EB8", 00:17:25.153 "uuid": "71c0f0b5-097e-4584-8d2a-8e43afbf2eb8" 00:17:25.153 }, 00:17:25.153 { 00:17:25.153 "nsid": 2, 00:17:25.153 "bdev_name": "Malloc1", 00:17:25.153 "name": "Malloc1", 00:17:25.153 "nguid": "82365E3C2FE1475B9AAC5299BD53F0A5", 00:17:25.153 "uuid": "82365e3c-2fe1-475b-9aac-5299bd53f0a5" 00:17:25.153 } 00:17:25.153 ] 00:17:25.153 } 00:17:25.153 ] 00:17:25.153 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.153 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 657470 00:17:25.153 Asynchronous Event Request test 00:17:25.153 Attaching to 192.168.100.8 00:17:25.153 Attached to 192.168.100.8 00:17:25.153 Registering asynchronous event callbacks... 00:17:25.153 Starting namespace attribute notice tests for all controllers... 00:17:25.153 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:25.153 aer_cb - Changed Namespace 00:17:25.153 Cleaning up... 
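Note: the repeated '[' '!' -e /tmp/aer_touch_file ']' / sleep 0.1 lines above are the harness's waitforfile polling loop: the aer binary touches the file once its event callbacks are registered, and only then does the test add Malloc1 to trigger the namespace-change AER. A condensed sketch of that loop (the real helper lives in autotest_common.sh):

    # Poll for a file 0.1 s at a time, at most 200 tries (~20 s).
    waitforfile() {
        local file=$1 i=0
        while [ ! -e "$file" ] && [ "$i" -lt 200 ]; do
            i=$((i + 1))
            sleep 0.1
        done
        [ -e "$file" ]   # non-zero exit if we timed out
    }
    waitforfile /tmp/aer_touch_file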
00:17:25.153 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:25.153 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.153 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:25.413 rmmod nvme_rdma 00:17:25.413 rmmod nvme_fabrics 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 657435 ']' 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 657435 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 657435 ']' 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 657435 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 657435 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 657435' 00:17:25.413 killing process with pid 
657435 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 657435 00:17:25.413 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 657435 00:17:25.672 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:25.672 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:17:25.672 00:17:25.672 real 0m8.273s 00:17:25.672 user 0m6.313s 00:17:25.672 sys 0m5.751s 00:17:25.672 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:25.672 17:42:03 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:25.672 ************************************ 00:17:25.672 END TEST nvmf_aer 00:17:25.672 ************************************ 00:17:25.672 17:42:04 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:17:25.672 17:42:04 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:25.672 17:42:04 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:25.672 17:42:04 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:25.672 ************************************ 00:17:25.672 START TEST nvmf_async_init 00:17:25.672 ************************************ 00:17:25.672 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:17:25.933 * Looking for test storage... 00:17:25.933 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:17:25.933 17:42:04 
nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:25.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.933 --rc genhtml_branch_coverage=1 00:17:25.933 --rc genhtml_function_coverage=1 00:17:25.933 --rc genhtml_legend=1 00:17:25.933 --rc geninfo_all_blocks=1 00:17:25.933 --rc geninfo_unexecuted_blocks=1 00:17:25.933 00:17:25.933 ' 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:25.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.933 --rc genhtml_branch_coverage=1 00:17:25.933 --rc genhtml_function_coverage=1 00:17:25.933 --rc genhtml_legend=1 00:17:25.933 --rc geninfo_all_blocks=1 00:17:25.933 --rc geninfo_unexecuted_blocks=1 00:17:25.933 00:17:25.933 ' 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:25.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.933 --rc genhtml_branch_coverage=1 00:17:25.933 --rc genhtml_function_coverage=1 00:17:25.933 --rc genhtml_legend=1 00:17:25.933 --rc geninfo_all_blocks=1 00:17:25.933 --rc geninfo_unexecuted_blocks=1 00:17:25.933 00:17:25.933 ' 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:25.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.933 --rc genhtml_branch_coverage=1 00:17:25.933 --rc genhtml_function_coverage=1 00:17:25.933 --rc genhtml_legend=1 00:17:25.933 --rc geninfo_all_blocks=1 00:17:25.933 --rc geninfo_unexecuted_blocks=1 00:17:25.933 00:17:25.933 ' 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:25.933 
17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:25.933 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:17:25.933 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:17:25.934 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 
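Note: host/async_init.sh fixes its fixture names and sizes up front (null_bdev_size=1024, null_block_size=512, null_bdev=null0); these values feed the bdev_null_create RPC issued once the target is up, later in this test. A stand-alone sketch of that call, assuming the default RPC socket:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    # Create the null bdev with the size/block-size pair defined above.
    "$SPDK/scripts/rpc.py" bdev_null_create null0 1024 512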
00:17:25.934 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:17:25.934 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:17:25.934 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:17:25.934 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=7f71dedf2e8d483ebe5b57ae87b540ba 00:17:25.934 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:17:25.934 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:17:25.934 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:25.934 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:25.934 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:25.934 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:25.934 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.934 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:25.934 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.934 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:25.934 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:25.934 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:17:25.934 17:42:04 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:17:32.503 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:17:32.503 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ mlx5_core == unbound ]] 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:17:32.503 Found net devices under 0000:18:00.0: mlx_0_0 00:17:32.503 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:17:32.504 Found net devices under 0000:18:00.1: mlx_0_1 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # rdma_device_init 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # uname 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # 
modprobe ib_core 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@528 -- # allocate_nic_ips 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:32.504 17:42:10 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:32.504 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:32.504 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:17:32.504 altname enp24s0f0np0 00:17:32.504 altname ens785f0np0 00:17:32.504 inet 192.168.100.8/24 scope global mlx_0_0 00:17:32.504 valid_lft forever preferred_lft forever 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:32.504 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:32.504 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:17:32.504 altname enp24s0f1np1 00:17:32.504 altname ens785f1np1 00:17:32.504 inet 192.168.100.9/24 scope global mlx_0_1 00:17:32.504 valid_lft forever preferred_lft forever 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 
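Note: throughout these traces the script resolves each RDMA interface to its IPv4 address with the same three-stage pipeline (ip -o -4 addr show, awk for the address column, cut to drop the prefix length). Condensed as a helper:

    # Print an interface's first IPv4 address without the /prefix.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # 192.168.100.8 on this rig
    get_ip_address mlx_0_1   # 192.168.100.9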
00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@109 -- # continue 2 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:17:32.504 192.168.100.9' 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:17:32.504 192.168.100.9' 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # head -n 1 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:17:32.504 192.168.100.9' 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # tail -n +2 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # head -n 1 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 
-- # modprobe nvme-rdma 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=660976 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 660976 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 660976 ']' 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:32.504 [2024-10-17 17:42:10.642639] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:17:32.504 [2024-10-17 17:42:10.642704] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:32.504 [2024-10-17 17:42:10.716029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.504 [2024-10-17 17:42:10.761556] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:32.504 [2024-10-17 17:42:10.761602] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:32.504 [2024-10-17 17:42:10.761612] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:32.504 [2024-10-17 17:42:10.761621] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:32.504 [2024-10-17 17:42:10.761628] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
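The nvmfappstart/waitforlisten pair traced here launches nvmf_tgt in the background and blocks until its RPC socket answers. A hedged sketch of that pattern, assuming the polling is done with rpc.py against /var/tmp/spdk.sock (the retry budget and the rpc_get_methods probe are my assumptions; the nvmf_tgt flags are from the trace):

#!/usr/bin/env bash
# Sketch of nvmfappstart + waitforlisten, under the assumptions above.
SPDK_ROOT=/var/jenkins/workspace/nvmf-phy-autotest/spdk
RPC_SOCK=/var/tmp/spdk.sock

"$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
echo "Waiting for process to start up and listen on UNIX domain socket $RPC_SOCK..."

for ((i = 0; i < 100; i++)); do
    # Bail out early if the target died during startup.
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited" >&2; exit 1; }
    # rpc_get_methods fails until the app is listening on the socket.
    if "$SPDK_ROOT/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.5
done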
00:17:32.504 [2024-10-17 17:42:10.762087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:32.504 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:32.762 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:32.762 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:17:32.762 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.762 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:32.762 [2024-10-17 17:42:10.935099] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa60080/0xa64570) succeed. 00:17:32.762 [2024-10-17 17:42:10.944137] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa61530/0xaa5c10) succeed. 00:17:32.762 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.762 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:17:32.762 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.762 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:32.762 null0 00:17:32.762 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.762 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:17:32.762 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.762 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:32.762 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.762 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:17:32.762 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.762 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:32.762 17:42:10 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.762 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 7f71dedf2e8d483ebe5b57ae87b540ba 00:17:32.762 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.762 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:32.762 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.762 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:17:32.762 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.762 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:32.762 [2024-10-17 17:42:11.013133] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:32.762 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.762 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:17:32.762 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.762 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:32.762 nvme0n1 00:17:32.762 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.762 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:32.762 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.762 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:32.762 [ 00:17:32.762 { 00:17:32.762 "name": "nvme0n1", 00:17:32.762 "aliases": [ 00:17:32.762 "7f71dedf-2e8d-483e-be5b-57ae87b540ba" 00:17:32.762 ], 00:17:32.762 "product_name": "NVMe disk", 00:17:32.762 "block_size": 512, 00:17:32.762 "num_blocks": 2097152, 00:17:32.762 "uuid": "7f71dedf-2e8d-483e-be5b-57ae87b540ba", 00:17:32.762 "numa_id": 0, 00:17:32.762 "assigned_rate_limits": { 00:17:32.762 "rw_ios_per_sec": 0, 00:17:32.762 "rw_mbytes_per_sec": 0, 00:17:32.762 "r_mbytes_per_sec": 0, 00:17:32.762 "w_mbytes_per_sec": 0 00:17:32.762 }, 00:17:32.762 "claimed": false, 00:17:32.762 "zoned": false, 00:17:32.762 "supported_io_types": { 00:17:32.762 "read": true, 00:17:32.762 "write": true, 00:17:32.762 "unmap": false, 00:17:32.762 "flush": true, 00:17:32.762 "reset": true, 00:17:32.762 "nvme_admin": true, 00:17:32.762 "nvme_io": true, 00:17:32.762 "nvme_io_md": false, 00:17:32.762 "write_zeroes": true, 00:17:32.762 "zcopy": false, 00:17:32.762 "get_zone_info": false, 00:17:32.762 "zone_management": false, 00:17:32.762 "zone_append": false, 00:17:32.762 "compare": true, 00:17:32.762 "compare_and_write": true, 00:17:32.762 "abort": true, 00:17:32.762 "seek_hole": false, 00:17:32.762 "seek_data": false, 00:17:32.762 "copy": true, 00:17:32.762 "nvme_iov_md": false 00:17:32.762 }, 00:17:32.762 "memory_domains": [ 00:17:32.762 { 00:17:32.762 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:17:32.762 "dma_device_type": 0 00:17:32.762 } 00:17:32.762 ], 00:17:32.762 "driver_specific": { 00:17:32.762 "nvme": [ 00:17:32.762 { 00:17:32.762 "trid": { 00:17:32.762 "trtype": "RDMA", 00:17:32.762 "adrfam": "IPv4", 00:17:32.762 "traddr": "192.168.100.8", 00:17:32.762 "trsvcid": "4420", 00:17:32.762 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:32.762 }, 00:17:32.762 "ctrlr_data": { 00:17:32.762 "cntlid": 1, 00:17:32.762 "vendor_id": "0x8086", 00:17:32.762 "model_number": "SPDK bdev Controller", 00:17:32.762 "serial_number": "00000000000000000000", 00:17:32.762 "firmware_revision": "25.01", 00:17:32.762 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:32.762 "oacs": { 00:17:32.762 "security": 0, 
00:17:32.762 "format": 0, 00:17:32.762 "firmware": 0, 00:17:32.762 "ns_manage": 0 00:17:32.762 }, 00:17:32.762 "multi_ctrlr": true, 00:17:32.762 "ana_reporting": false 00:17:32.762 }, 00:17:32.762 "vs": { 00:17:32.762 "nvme_version": "1.3" 00:17:32.762 }, 00:17:32.762 "ns_data": { 00:17:32.762 "id": 1, 00:17:32.762 "can_share": true 00:17:32.762 } 00:17:32.762 } 00:17:32.762 ], 00:17:32.762 "mp_policy": "active_passive" 00:17:32.762 } 00:17:32.762 } 00:17:32.762 ] 00:17:32.762 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.762 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:17:32.762 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.762 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:32.762 [2024-10-17 17:42:11.117204] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:32.762 [2024-10-17 17:42:11.134382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:33.020 [2024-10-17 17:42:11.156697] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:33.020 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.020 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:33.020 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.020 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.020 [ 00:17:33.020 { 00:17:33.020 "name": "nvme0n1", 00:17:33.020 "aliases": [ 00:17:33.020 "7f71dedf-2e8d-483e-be5b-57ae87b540ba" 00:17:33.020 ], 00:17:33.020 "product_name": "NVMe disk", 00:17:33.020 "block_size": 512, 00:17:33.020 "num_blocks": 2097152, 00:17:33.020 "uuid": "7f71dedf-2e8d-483e-be5b-57ae87b540ba", 00:17:33.020 "numa_id": 0, 00:17:33.020 "assigned_rate_limits": { 00:17:33.020 "rw_ios_per_sec": 0, 00:17:33.020 "rw_mbytes_per_sec": 0, 00:17:33.020 "r_mbytes_per_sec": 0, 00:17:33.020 "w_mbytes_per_sec": 0 00:17:33.020 }, 00:17:33.020 "claimed": false, 00:17:33.020 "zoned": false, 00:17:33.020 "supported_io_types": { 00:17:33.020 "read": true, 00:17:33.020 "write": true, 00:17:33.020 "unmap": false, 00:17:33.020 "flush": true, 00:17:33.021 "reset": true, 00:17:33.021 "nvme_admin": true, 00:17:33.021 "nvme_io": true, 00:17:33.021 "nvme_io_md": false, 00:17:33.021 "write_zeroes": true, 00:17:33.021 "zcopy": false, 00:17:33.021 "get_zone_info": false, 00:17:33.021 "zone_management": false, 00:17:33.021 "zone_append": false, 00:17:33.021 "compare": true, 00:17:33.021 "compare_and_write": true, 00:17:33.021 "abort": true, 00:17:33.021 "seek_hole": false, 00:17:33.021 "seek_data": false, 00:17:33.021 "copy": true, 00:17:33.021 "nvme_iov_md": false 00:17:33.021 }, 00:17:33.021 "memory_domains": [ 00:17:33.021 { 00:17:33.021 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:17:33.021 "dma_device_type": 0 00:17:33.021 } 00:17:33.021 ], 00:17:33.021 "driver_specific": { 00:17:33.021 "nvme": [ 00:17:33.021 { 00:17:33.021 "trid": { 00:17:33.021 "trtype": "RDMA", 00:17:33.021 "adrfam": "IPv4", 00:17:33.021 "traddr": "192.168.100.8", 00:17:33.021 "trsvcid": "4420", 00:17:33.021 "subnqn": 
"nqn.2016-06.io.spdk:cnode0" 00:17:33.021 }, 00:17:33.021 "ctrlr_data": { 00:17:33.021 "cntlid": 2, 00:17:33.021 "vendor_id": "0x8086", 00:17:33.021 "model_number": "SPDK bdev Controller", 00:17:33.021 "serial_number": "00000000000000000000", 00:17:33.021 "firmware_revision": "25.01", 00:17:33.021 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:33.021 "oacs": { 00:17:33.021 "security": 0, 00:17:33.021 "format": 0, 00:17:33.021 "firmware": 0, 00:17:33.021 "ns_manage": 0 00:17:33.021 }, 00:17:33.021 "multi_ctrlr": true, 00:17:33.021 "ana_reporting": false 00:17:33.021 }, 00:17:33.021 "vs": { 00:17:33.021 "nvme_version": "1.3" 00:17:33.021 }, 00:17:33.021 "ns_data": { 00:17:33.021 "id": 1, 00:17:33.021 "can_share": true 00:17:33.021 } 00:17:33.021 } 00:17:33.021 ], 00:17:33.021 "mp_policy": "active_passive" 00:17:33.021 } 00:17:33.021 } 00:17:33.021 ] 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.GM2Y7TpqgQ 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.GM2Y7TpqgQ 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.GM2Y7TpqgQ 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.021 [2024-10-17 17:42:11.239428] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.021 17:42:11 
nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.021 [2024-10-17 17:42:11.259478] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:33.021 nvme0n1 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.021 [ 00:17:33.021 { 00:17:33.021 "name": "nvme0n1", 00:17:33.021 "aliases": [ 00:17:33.021 "7f71dedf-2e8d-483e-be5b-57ae87b540ba" 00:17:33.021 ], 00:17:33.021 "product_name": "NVMe disk", 00:17:33.021 "block_size": 512, 00:17:33.021 "num_blocks": 2097152, 00:17:33.021 "uuid": "7f71dedf-2e8d-483e-be5b-57ae87b540ba", 00:17:33.021 "numa_id": 0, 00:17:33.021 "assigned_rate_limits": { 00:17:33.021 "rw_ios_per_sec": 0, 00:17:33.021 "rw_mbytes_per_sec": 0, 00:17:33.021 "r_mbytes_per_sec": 0, 00:17:33.021 "w_mbytes_per_sec": 0 00:17:33.021 }, 00:17:33.021 "claimed": false, 00:17:33.021 "zoned": false, 00:17:33.021 "supported_io_types": { 00:17:33.021 "read": true, 00:17:33.021 "write": true, 00:17:33.021 "unmap": false, 00:17:33.021 "flush": true, 00:17:33.021 "reset": true, 00:17:33.021 "nvme_admin": true, 00:17:33.021 "nvme_io": true, 00:17:33.021 "nvme_io_md": false, 00:17:33.021 "write_zeroes": true, 00:17:33.021 "zcopy": false, 00:17:33.021 "get_zone_info": false, 00:17:33.021 "zone_management": false, 00:17:33.021 "zone_append": false, 00:17:33.021 "compare": true, 00:17:33.021 "compare_and_write": true, 00:17:33.021 "abort": true, 00:17:33.021 "seek_hole": false, 00:17:33.021 "seek_data": false, 00:17:33.021 "copy": true, 00:17:33.021 "nvme_iov_md": false 00:17:33.021 }, 00:17:33.021 "memory_domains": [ 00:17:33.021 { 00:17:33.021 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:17:33.021 "dma_device_type": 0 00:17:33.021 } 00:17:33.021 ], 00:17:33.021 "driver_specific": { 00:17:33.021 "nvme": [ 00:17:33.021 { 00:17:33.021 "trid": { 00:17:33.021 "trtype": "RDMA", 00:17:33.021 "adrfam": "IPv4", 00:17:33.021 "traddr": "192.168.100.8", 00:17:33.021 "trsvcid": "4421", 00:17:33.021 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:33.021 }, 00:17:33.021 "ctrlr_data": { 00:17:33.021 "cntlid": 3, 00:17:33.021 "vendor_id": "0x8086", 00:17:33.021 "model_number": "SPDK bdev Controller", 00:17:33.021 "serial_number": "00000000000000000000", 00:17:33.021 "firmware_revision": 
"25.01", 00:17:33.021 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:33.021 "oacs": { 00:17:33.021 "security": 0, 00:17:33.021 "format": 0, 00:17:33.021 "firmware": 0, 00:17:33.021 "ns_manage": 0 00:17:33.021 }, 00:17:33.021 "multi_ctrlr": true, 00:17:33.021 "ana_reporting": false 00:17:33.021 }, 00:17:33.021 "vs": { 00:17:33.021 "nvme_version": "1.3" 00:17:33.021 }, 00:17:33.021 "ns_data": { 00:17:33.021 "id": 1, 00:17:33.021 "can_share": true 00:17:33.021 } 00:17:33.021 } 00:17:33.021 ], 00:17:33.021 "mp_policy": "active_passive" 00:17:33.021 } 00:17:33.021 } 00:17:33.021 ] 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.021 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:33.022 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.022 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.022 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.022 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.GM2Y7TpqgQ 00:17:33.022 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:17:33.022 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:17:33.022 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:17:33.022 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:17:33.022 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:17:33.022 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:17:33.022 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:17:33.022 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:33.022 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:17:33.022 rmmod nvme_rdma 00:17:33.022 rmmod nvme_fabrics 00:17:33.280 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:33.280 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:17:33.280 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:17:33.280 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 660976 ']' 00:17:33.280 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 660976 00:17:33.280 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 660976 ']' 00:17:33.280 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 660976 00:17:33.280 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:17:33.280 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:33.280 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 660976 00:17:33.280 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:33.280 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:17:33.280 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 660976' 00:17:33.280 killing process with pid 660976 00:17:33.280 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 660976 00:17:33.280 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 660976 00:17:33.537 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:17:33.537 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:17:33.537 00:17:33.537 real 0m7.629s 00:17:33.537 user 0m2.934s 00:17:33.537 sys 0m5.293s 00:17:33.537 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:33.537 17:42:11 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:33.537 ************************************ 00:17:33.537 END TEST nvmf_async_init 00:17:33.537 ************************************ 00:17:33.537 17:42:11 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:17:33.537 17:42:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:33.537 17:42:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:33.537 17:42:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:33.537 ************************************ 00:17:33.537 START TEST dma 00:17:33.537 ************************************ 00:17:33.537 17:42:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:17:33.537 * Looking for test storage... 
00:17:33.537 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:17:33.537 17:42:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:33.537 17:42:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:17:33.537 17:42:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:33.537 17:42:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:33.537 17:42:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:33.537 17:42:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:33.537 17:42:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:33.537 17:42:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:17:33.537 17:42:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:17:33.537 17:42:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:17:33.537 17:42:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:17:33.537 17:42:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:17:33.537 17:42:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:17:33.537 17:42:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:17:33.538 17:42:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:33.538 17:42:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:17:33.538 17:42:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:17:33.538 17:42:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:33.538 17:42:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:33.538 17:42:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:17:33.538 17:42:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:17:33.538 17:42:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:33.538 17:42:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:17:33.538 17:42:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:33.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.796 --rc genhtml_branch_coverage=1 00:17:33.796 --rc genhtml_function_coverage=1 00:17:33.796 --rc genhtml_legend=1 00:17:33.796 --rc geninfo_all_blocks=1 00:17:33.796 --rc geninfo_unexecuted_blocks=1 00:17:33.796 00:17:33.796 ' 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:33.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.796 --rc genhtml_branch_coverage=1 00:17:33.796 --rc genhtml_function_coverage=1 00:17:33.796 --rc genhtml_legend=1 00:17:33.796 --rc geninfo_all_blocks=1 00:17:33.796 --rc geninfo_unexecuted_blocks=1 00:17:33.796 00:17:33.796 ' 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:33.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.796 --rc genhtml_branch_coverage=1 00:17:33.796 --rc genhtml_function_coverage=1 00:17:33.796 --rc genhtml_legend=1 00:17:33.796 --rc geninfo_all_blocks=1 00:17:33.796 --rc geninfo_unexecuted_blocks=1 00:17:33.796 00:17:33.796 ' 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:33.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.796 --rc genhtml_branch_coverage=1 00:17:33.796 --rc genhtml_function_coverage=1 00:17:33.796 --rc genhtml_legend=1 00:17:33.796 --rc geninfo_all_blocks=1 00:17:33.796 --rc geninfo_unexecuted_blocks=1 00:17:33.796 00:17:33.796 ' 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.796 17:42:11 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:17:33.797 17:42:11 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.797 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:17:33.797 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:33.797 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:33.797 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:33.797 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:33.797 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:33.797 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:33.797 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:33.797 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:33.797 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:33.797 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:33.797 17:42:11 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:17:33.797 17:42:11 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:17:33.797 17:42:11 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:17:33.797 17:42:11 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0 00:17:33.797 17:42:11 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit 00:17:33.797 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:17:33.797 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:33.797 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:33.797 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:33.797 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:33.797 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.797 17:42:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
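The dma test sources the same nvmf/common.sh preamble traced above: fixed test ports 4420-4422, the 192.168.100.0/24 address prefix, and a host NQN/ID pair generated by nvme-cli. A hedged recap of that setup (the values are from the trace; packing them into one function, and deriving NVME_HOSTID from the NQN suffix, are my reading of the helper):

#!/usr/bin/env bash
# Recap of the nvmf/common.sh variables established in the trace above.
setup_nvmf_env() {
    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVMF_IP_PREFIX=192.168.100
    NVMF_IP_LEAST_ADDR=8
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    # nvme gen-hostnqn prints "nqn.2014-08.org.nvmexpress:uuid:<uuid>";
    # taking the uuid after the last colon matches the traced values.
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
}

setup_nvmf_env
echo "first target: $NVMF_IP_PREFIX.$NVMF_IP_LEAST_ADDR:$NVMF_PORT"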
00:17:33.797 17:42:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.797 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:33.797 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:33.797 17:42:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@309 -- # xtrace_disable 00:17:33.797 17:42:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:17:40.367 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:40.367 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # pci_devs=() 00:17:40.367 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:40.367 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:40.367 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:40.367 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:40.367 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:40.367 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # net_devs=() 00:17:40.367 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:40.367 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # e810=() 00:17:40.367 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # local -ga e810 00:17:40.367 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # x722=() 00:17:40.367 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # local -ga x722 00:17:40.367 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # mlx=() 00:17:40.367 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # local -ga mlx 00:17:40.367 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:40.367 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:40.367 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:40.367 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:40.367 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:40.367 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:40.367 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:40.367 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:40.367 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:40.367 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:40.367 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:40.367 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:40.367 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:40.367 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:17:40.367 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:17:40.368 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:17:40.368 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:17:40.368 Found net devices under 0000:18:00.0: mlx_0_0 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@425 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:17:40.368 Found net devices under 0000:18:00.1: mlx_0_1 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@440 -- # is_hw=yes 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@446 -- # rdma_device_init 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # uname 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe ib_core 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@528 -- # allocate_nic_ips 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:17:40.368 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:40.368 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:17:40.368 altname enp24s0f0np0 00:17:40.368 altname ens785f0np0 00:17:40.368 inet 192.168.100.8/24 scope global mlx_0_0 00:17:40.368 valid_lft forever preferred_lft forever 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:17:40.368 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:40.368 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:17:40.368 altname enp24s0f1np1 00:17:40.368 altname ens785f1np1 00:17:40.368 inet 192.168.100.9/24 scope global mlx_0_1 00:17:40.368 valid_lft forever preferred_lft forever 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@448 -- # return 0 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh 
rxe-net 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@109 -- # continue 2 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:17:40.368 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:17:40.369 192.168.100.9' 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # head -n 1 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:17:40.369 192.168.100.9' 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:17:40.369 192.168.100.9' 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # tail -n +2 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # head -n 1 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@507 -- # nvmfpid=663945 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@508 -- # waitforlisten 663945 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@831 -- # '[' -z 663945 ']' 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:17:40.369 [2024-10-17 17:42:18.406162] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:17:40.369 [2024-10-17 17:42:18.406219] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.369 [2024-10-17 17:42:18.480945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:40.369 [2024-10-17 17:42:18.530045] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.369 [2024-10-17 17:42:18.530084] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:40.369 [2024-10-17 17:42:18.530095] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:40.369 [2024-10-17 17:42:18.530105] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:40.369 [2024-10-17 17:42:18.530113] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
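The dma target setup that follows collapses to five RPCs: create the RDMA transport, create a 256 MiB malloc bdev with 512-byte blocks, create the subsystem, attach the bdev as its namespace, and listen on 192.168.100.8:4420. A hedged rpc.py replay of the rpc_cmd calls traced below (the rpc.py invocation is my framing of the wrapper; all names and arguments are from the trace):

#!/usr/bin/env bash
# The five rpc_cmd calls below, replayed through rpc.py (path assumed).
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024
$rpc bdev_malloc_create 256 512 -b Malloc0   # 256 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t rdma -a 192.168.100.8 -s 4420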
00:17:40.369 [2024-10-17 17:42:18.531236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.369 [2024-10-17 17:42:18.531239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@864 -- # return 0 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.369 17:42:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:17:40.369 [2024-10-17 17:42:18.695861] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7b7bc0/0x7bc0b0) succeed. 00:17:40.369 [2024-10-17 17:42:18.704909] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7b9110/0x7fd750) succeed. 00:17:40.628 17:42:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.628 17:42:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:17:40.628 17:42:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.628 17:42:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:17:40.628 Malloc0 00:17:40.628 17:42:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.628 17:42:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:17:40.628 17:42:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.628 17:42:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:17:40.628 17:42:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.628 17:42:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:17:40.628 17:42:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.628 17:42:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:17:40.628 17:42:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.628 17:42:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:17:40.628 17:42:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.628 17:42:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:17:40.628 [2024-10-17 17:42:18.873837] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:40.628 17:42:18 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.628 17:42:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 
-o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:17:40.628 17:42:18 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:17:40.628 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@558 -- # config=() 00:17:40.628 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@558 -- # local subsystem config 00:17:40.628 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:17:40.628 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:17:40.628 { 00:17:40.628 "params": { 00:17:40.628 "name": "Nvme$subsystem", 00:17:40.628 "trtype": "$TEST_TRANSPORT", 00:17:40.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:40.628 "adrfam": "ipv4", 00:17:40.628 "trsvcid": "$NVMF_PORT", 00:17:40.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:40.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:40.628 "hdgst": ${hdgst:-false}, 00:17:40.628 "ddgst": ${ddgst:-false} 00:17:40.628 }, 00:17:40.628 "method": "bdev_nvme_attach_controller" 00:17:40.628 } 00:17:40.628 EOF 00:17:40.628 )") 00:17:40.628 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@580 -- # cat 00:17:40.628 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@582 -- # jq . 00:17:40.628 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@583 -- # IFS=, 00:17:40.628 17:42:18 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:17:40.628 "params": { 00:17:40.628 "name": "Nvme0", 00:17:40.629 "trtype": "rdma", 00:17:40.629 "traddr": "192.168.100.8", 00:17:40.629 "adrfam": "ipv4", 00:17:40.629 "trsvcid": "4420", 00:17:40.629 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:40.629 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:40.629 "hdgst": false, 00:17:40.629 "ddgst": false 00:17:40.629 }, 00:17:40.629 "method": "bdev_nvme_attach_controller" 00:17:40.629 }' 00:17:40.629 [2024-10-17 17:42:18.924439] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
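Stripped of the xtrace plumbing, the target-side setup that host/dma.sh@96-100 just performed is four RPCs plus a listener; reproduced standalone it would be roughly this (rpc here is a thin wrapper over scripts/rpc.py talking to the default /var/tmp/spdk.sock):

    rpc() { "$rootdir/scripts/rpc.py" "$@"; }   # $rootdir = the spdk checkout used above

    rpc nvmf_create_transport -t rdma --num-shared-buffers 1024
    rpc bdev_malloc_create 256 512 -b Malloc0   # 256 MiB backing bdev, 512 B blocks
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t rdma -a 192.168.100.8 -s 4420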
00:17:40.629 [2024-10-17 17:42:18.924490] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid664132 ] 00:17:40.629 [2024-10-17 17:42:18.993485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:40.888 [2024-10-17 17:42:19.040240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:40.888 [2024-10-17 17:42:19.040243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:46.304 bdev Nvme0n1 reports 1 memory domains 00:17:46.304 bdev Nvme0n1 supports RDMA memory domain 00:17:46.304 Initialization complete, running randrw IO for 5 sec on 2 cores 00:17:46.304 ========================================================================== 00:17:46.304 Latency [us] 00:17:46.304 IOPS MiB/s Average min max 00:17:46.304 Core 2: 20943.00 81.81 763.32 253.80 8506.77 00:17:46.304 Core 3: 21054.59 82.24 759.24 248.87 8236.19 00:17:46.304 ========================================================================== 00:17:46.304 Total : 41997.59 164.05 761.27 248.87 8506.77 00:17:46.304 00:17:46.304 Total operations: 210008, translate 210008 pull_push 0 memzero 0 00:17:46.304 17:42:24 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:17:46.304 17:42:24 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json 00:17:46.304 17:42:24 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq . 00:17:46.304 [2024-10-17 17:42:24.456556] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
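The --json /dev/fd/62 argument both runs pass is bash process substitution: the config generator's stdout is handed to test_dma as a readable fd. In outline — the generator body here is a sketch of SPDK's generic subsystems/config JSON shape, with sizes mirroring the bdev_malloc_create 256 512 above, not copied from host/dma.sh:

    gen_malloc_json() {
        cat <<'JSON'
    {
      "subsystems": [ {
        "subsystem": "bdev",
        "config": [ {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 524288, "block_size": 512 }
        } ]
      } ]
    }
    JSON
    }

    test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc \
        --json <(gen_malloc_json) -b Malloc0 -x pull_push   # <() expands to /dev/fd/NN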
00:17:46.304 [2024-10-17 17:42:24.456615] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid664842 ] 00:17:46.304 [2024-10-17 17:42:24.526992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:46.304 [2024-10-17 17:42:24.569653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:46.304 [2024-10-17 17:42:24.569655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:51.576 bdev Malloc0 reports 2 memory domains 00:17:51.576 bdev Malloc0 doesn't support RDMA memory domain 00:17:51.576 Initialization complete, running randrw IO for 5 sec on 2 cores 00:17:51.576 ========================================================================== 00:17:51.576 Latency [us] 00:17:51.576 IOPS MiB/s Average min max 00:17:51.576 Core 2: 14087.28 55.03 1135.04 535.49 1470.53 00:17:51.576 Core 3: 14148.86 55.27 1130.11 509.16 2205.60 00:17:51.576 ========================================================================== 00:17:51.576 Total : 28236.15 110.30 1132.57 509.16 2205.60 00:17:51.576 00:17:51.576 Total operations: 141228, translate 0 pull_push 564912 memzero 0 00:17:51.576 17:42:29 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:17:51.576 17:42:29 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:17:51.576 17:42:29 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:17:51.576 17:42:29 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:17:51.576 Ignoring -M option 00:17:51.576 [2024-10-17 17:42:29.913332] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
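The -b lvs0/lvol0 target of the next two runs is a logical volume carved out of the exported namespace; gen_lvol_nvme_json drives this through config JSON, but the equivalent RPC sequence would be approximately the following (the rpc wrapper and <size_mib> are placeholders, not taken from this trace):

    rpc bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 \
        -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    rpc bdev_lvol_create_lvstore Nvme0n1 lvs0      # lvstore on the attached namespace
    rpc bdev_lvol_create -l lvs0 lvol0 <size_mib>  # the lvs0/lvol0 bdev under test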
00:17:51.576 [2024-10-17 17:42:29.913396] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid665475 ] 00:17:51.835 [2024-10-17 17:42:29.983879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:51.835 [2024-10-17 17:42:30.033142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:51.835 [2024-10-17 17:42:30.033145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.104 bdev 39334592-6589-4f00-bcb3-8aa177da9b1c reports 1 memory domains 00:17:57.104 bdev 39334592-6589-4f00-bcb3-8aa177da9b1c supports RDMA memory domain 00:17:57.104 Initialization complete, running randread IO for 5 sec on 2 cores 00:17:57.104 ========================================================================== 00:17:57.104 Latency [us] 00:17:57.104 IOPS MiB/s Average min max 00:17:57.104 Core 2: 63931.53 249.73 249.22 85.30 1435.59 00:17:57.104 Core 3: 64545.45 252.13 246.84 81.33 1501.95 00:17:57.104 ========================================================================== 00:17:57.104 Total : 128476.98 501.86 248.02 81.33 1501.95 00:17:57.104 00:17:57.104 Total operations: 642462, translate 0 pull_push 0 memzero 642462 00:17:57.362 17:42:35 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:17:57.363 [2024-10-17 17:42:35.604924] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:59.897 Initializing NVMe Controllers 00:17:59.897 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:17:59.897 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:17:59.897 Initialization complete. Launching workers. 00:17:59.897 ======================================================== 00:17:59.897 Latency(us) 00:17:59.897 Device Information : IOPS MiB/s Average min max 00:17:59.897 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.88 7980.52 7953.21 7996.26 00:17:59.897 ======================================================== 00:17:59.897 Total : 2016.00 7.88 7980.52 7953.21 7996.26 00:17:59.897 00:17:59.897 17:42:37 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:17:59.897 17:42:37 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:17:59.897 17:42:37 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:17:59.897 17:42:37 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:17:59.897 [2024-10-17 17:42:37.947882] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
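Between the memzero and final translate passes, host/dma.sh@113 fires a short spdk_nvme_perf write load at the target; its -r transport ID carries no subnqn, so the tool goes through the discovery subsystem first, which is exactly what the deprecation warning above is flagging:

    build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 \
        -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420'
    # -q/-o: queue depth 16, 4 KiB I/O; -w write; -t 1: run for one second
    # -r: target transport ID; adding subnqn:... would skip the discovery hop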
00:17:59.897 [2024-10-17 17:42:37.947940] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid666517 ] 00:17:59.897 [2024-10-17 17:42:38.016981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:59.897 [2024-10-17 17:42:38.062706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:59.897 [2024-10-17 17:42:38.062709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:05.168 bdev 8f6e0bb8-ea35-45b9-be37-f61085b85923 reports 1 memory domains 00:18:05.168 bdev 8f6e0bb8-ea35-45b9-be37-f61085b85923 supports RDMA memory domain 00:18:05.168 Initialization complete, running randrw IO for 5 sec on 2 cores 00:18:05.168 ========================================================================== 00:18:05.168 Latency [us] 00:18:05.168 IOPS MiB/s Average min max 00:18:05.168 Core 2: 18495.97 72.25 864.35 67.77 11401.70 00:18:05.168 Core 3: 18763.13 73.29 852.07 15.26 11039.79 00:18:05.168 ========================================================================== 00:18:05.168 Total : 37259.10 145.54 858.17 15.26 11401.70 00:18:05.168 00:18:05.168 Total operations: 186322, translate 186220 pull_push 0 memzero 102 00:18:05.168 17:42:43 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:18:05.168 17:42:43 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini 00:18:05.168 17:42:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:05.168 17:42:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # sync 00:18:05.168 17:42:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:05.168 17:42:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:05.168 17:42:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set +e 00:18:05.168 17:42:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:05.168 17:42:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:05.168 rmmod nvme_rdma 00:18:05.168 rmmod nvme_fabrics 00:18:05.427 17:42:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:05.427 17:42:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@128 -- # set -e 00:18:05.427 17:42:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@129 -- # return 0 00:18:05.427 17:42:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@515 -- # '[' -n 663945 ']' 00:18:05.427 17:42:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@516 -- # killprocess 663945 00:18:05.427 17:42:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@950 -- # '[' -z 663945 ']' 00:18:05.427 17:42:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@954 -- # kill -0 663945 00:18:05.427 17:42:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@955 -- # uname 00:18:05.427 17:42:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:05.427 17:42:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 663945 00:18:05.427 17:42:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:05.427 17:42:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:05.427 17:42:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@968 -- # echo 'killing process with pid 663945' 00:18:05.427 killing process with 
pid 663945 00:18:05.427 17:42:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@969 -- # kill 663945 00:18:05.427 17:42:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@974 -- # wait 663945 00:18:05.686 17:42:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:05.686 17:42:43 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:18:05.686 00:18:05.686 real 0m32.169s 00:18:05.686 user 1m35.444s 00:18:05.686 sys 0m6.019s 00:18:05.686 17:42:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:05.686 17:42:43 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:18:05.686 ************************************ 00:18:05.686 END TEST dma 00:18:05.686 ************************************ 00:18:05.686 17:42:43 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:18:05.686 17:42:43 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:05.686 17:42:43 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:05.686 17:42:43 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:05.686 ************************************ 00:18:05.686 START TEST nvmf_identify 00:18:05.686 ************************************ 00:18:05.686 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:18:05.945 * Looking for test storage... 00:18:05.945 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 
00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:05.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.945 --rc genhtml_branch_coverage=1 00:18:05.945 --rc genhtml_function_coverage=1 00:18:05.945 --rc genhtml_legend=1 00:18:05.945 --rc geninfo_all_blocks=1 00:18:05.945 --rc geninfo_unexecuted_blocks=1 00:18:05.945 00:18:05.945 ' 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:05.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.945 --rc genhtml_branch_coverage=1 00:18:05.945 --rc genhtml_function_coverage=1 00:18:05.945 --rc genhtml_legend=1 00:18:05.945 --rc geninfo_all_blocks=1 00:18:05.945 --rc geninfo_unexecuted_blocks=1 00:18:05.945 00:18:05.945 ' 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:05.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.945 --rc genhtml_branch_coverage=1 00:18:05.945 --rc genhtml_function_coverage=1 00:18:05.945 --rc genhtml_legend=1 00:18:05.945 --rc geninfo_all_blocks=1 00:18:05.945 --rc geninfo_unexecuted_blocks=1 00:18:05.945 00:18:05.945 ' 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:05.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.945 --rc genhtml_branch_coverage=1 00:18:05.945 --rc genhtml_function_coverage=1 00:18:05.945 --rc genhtml_legend=1 00:18:05.945 --rc geninfo_all_blocks=1 00:18:05.945 --rc geninfo_unexecuted_blocks=1 00:18:05.945 00:18:05.945 ' 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:18:05.945 17:42:44 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:05.945 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.946 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.946 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.946 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:18:05.946 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.946 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:18:05.946 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:05.946 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:05.946 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:05.946 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:05.946 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:05.946 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:05.946 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:05.946 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:05.946 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:05.946 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:05.946 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:05.946 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:05.946 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:18:05.946 17:42:44 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:18:05.946 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:05.946 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:05.946 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:05.946 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:05.946 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.946 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:05.946 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.946 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:05.946 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:05.946 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:18:05.946 17:42:44 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:12.508 17:42:50 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:18:12.508 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:18:12.508 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:12.508 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:12.509 Found net devices under 0000:18:00.0: mlx_0_0 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:12.509 Found net devices under 0000:18:00.1: mlx_0_1 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # rdma_device_init 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # uname 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@528 -- # allocate_nic_ips 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:12.509 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:12.509 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:18:12.509 altname enp24s0f0np0 00:18:12.509 altname ens785f0np0 00:18:12.509 inet 192.168.100.8/24 scope global mlx_0_0 00:18:12.509 valid_lft forever preferred_lft forever 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:12.509 17:42:50 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:12.509 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:12.509 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:18:12.509 altname enp24s0f1np1 00:18:12.509 altname ens785f1np1 00:18:12.509 inet 192.168.100.9/24 scope global mlx_0_1 00:18:12.509 valid_lft forever preferred_lft forever 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@109 -- # continue 2 00:18:12.509 17:42:50 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:18:12.509 192.168.100.9' 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:18:12.509 192.168.100.9' 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # head -n 1 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:18:12.509 192.168.100.9' 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # tail -n +2 00:18:12.509 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # head -n 1 00:18:12.768 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:12.768 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:18:12.768 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:12.768 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:18:12.768 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:18:12.768 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:18:12.768 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:18:12.768 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:12.768 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:12.769 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:12.769 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=670293 00:18:12.769 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:12.769 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # 
waitforlisten 670293 00:18:12.769 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 670293 ']' 00:18:12.769 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.769 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:12.769 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.769 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:12.769 17:42:50 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:12.769 [2024-10-17 17:42:50.971154] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:18:12.769 [2024-10-17 17:42:50.971217] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:12.769 [2024-10-17 17:42:51.048005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:12.769 [2024-10-17 17:42:51.098392] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:12.769 [2024-10-17 17:42:51.098438] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:12.769 [2024-10-17 17:42:51.098448] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:12.769 [2024-10-17 17:42:51.098457] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:12.769 [2024-10-17 17:42:51.098464] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:12.769 [2024-10-17 17:42:51.099844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.769 [2024-10-17 17:42:51.099866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:12.769 [2024-10-17 17:42:51.099932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:12.769 [2024-10-17 17:42:51.099934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.027 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:13.027 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:18:13.027 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:13.027 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.027 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:13.027 [2024-10-17 17:42:51.240434] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a692c0/0x1a6d7b0) succeed. 00:18:13.027 [2024-10-17 17:42:51.250880] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a6a950/0x1aaee50) succeed. 
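waitforlisten, seen here for pid 670293 just as for 663945 earlier, is the harness's poll loop: keep checking that the target pid is alive while retrying an RPC until the socket answers. In spirit (retry budget and interval are assumptions, simplified from autotest_common.sh):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 100; i > 0; i--)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target exited early
            # a successful RPC round-trip means the listener is up
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1                                     # timed out
    }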
00:18:13.027 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:13.027 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt
00:18:13.027 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable
00:18:13.027 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:18:13.289 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:18:13.289 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:13.289 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:18:13.289 Malloc0
00:18:13.289 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:13.289 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:18:13.289 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:13.289 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:18:13.289 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:13.289 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:18:13.289 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:13.289 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:18:13.289 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:13.289 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:18:13.289 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:13.289 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:18:13.289 [2024-10-17 17:42:51.483661] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:18:13.289 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:13.289 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:18:13.289 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:13.289 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:18:13.289 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:13.289 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:18:13.289 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:13.289 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:18:13.289 [
00:18:13.289 {
00:18:13.289 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:18:13.289 "subtype": "Discovery",
00:18:13.289 "listen_addresses": [
00:18:13.289 {
00:18:13.289 "trtype": "RDMA",
00:18:13.289 "adrfam": "IPv4",
00:18:13.289 "traddr": "192.168.100.8",
00:18:13.289 "trsvcid": "4420"
00:18:13.289 }
00:18:13.289 ],
00:18:13.289 "allow_any_host": true,
00:18:13.289 "hosts": []
00:18:13.289 },
00:18:13.289 {
00:18:13.289 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:18:13.289 "subtype": "NVMe",
00:18:13.289 "listen_addresses": [
00:18:13.289 {
00:18:13.289 "trtype": "RDMA",
00:18:13.289 "adrfam": "IPv4",
00:18:13.289 "traddr": "192.168.100.8",
00:18:13.289 "trsvcid": "4420"
00:18:13.289 }
00:18:13.289 ],
00:18:13.289 "allow_any_host": true,
00:18:13.289 "hosts": [],
00:18:13.289 "serial_number": "SPDK00000000000001",
00:18:13.289 "model_number": "SPDK bdev Controller",
00:18:13.289 "max_namespaces": 32,
00:18:13.289 "min_cntlid": 1,
00:18:13.289 "max_cntlid": 65519,
00:18:13.289 "namespaces": [
00:18:13.289 {
00:18:13.289 "nsid": 1,
00:18:13.289 "bdev_name": "Malloc0",
00:18:13.289 "name": "Malloc0",
00:18:13.289 "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:18:13.289 "eui64": "ABCDEF0123456789",
00:18:13.289 "uuid": "d24999d0-8b70-4eb9-9702-fcb703ec4fd7"
00:18:13.289 }
00:18:13.289 ]
00:18:13.289 }
00:18:13.289 ]
00:18:13.289 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:13.289 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:18:13.289 [2024-10-17 17:42:51.542475] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization...
00:18:13.289 [2024-10-17 17:42:51.542517] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid670370 ]
00:18:13.289 [2024-10-17 17:42:51.587218] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout)
00:18:13.289 [2024-10-17 17:42:51.587298] nvme_rdma.c:2214:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr
00:18:13.289 [2024-10-17 17:42:51.587317] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2
00:18:13.289 [2024-10-17 17:42:51.587322] nvme_rdma.c:1219:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420
00:18:13.289 [2024-10-17 17:42:51.587355] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout)
00:18:13.289 [2024-10-17 17:42:51.599599] nvme_rdma.c: 431:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32.
00:18:13.289 [2024-10-17 17:42:51.611035] nvme_rdma.c:1101:nvme_rdma_connect_established: *DEBUG*: rc =0 00:18:13.289 [2024-10-17 17:42:51.611044] nvme_rdma.c:1106:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:18:13.289 [2024-10-17 17:42:51.611052] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x181e00 00:18:13.289 [2024-10-17 17:42:51.611061] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x181e00 00:18:13.289 [2024-10-17 17:42:51.611067] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.611074] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.611080] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.611089] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.611096] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.611102] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.611109] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.611115] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.611121] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.611128] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.611134] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7e0 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.611140] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf808 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.611147] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf830 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.611153] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf858 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.611159] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf880 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.611165] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a8 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.611172] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8d0 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.611178] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f8 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.611185] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf920 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.611191] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf948 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.611197] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf970 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 
17:42:51.611203] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf998 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.611210] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9c0 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.611216] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e8 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.611223] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa10 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.611229] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa38 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.611235] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa60 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.611241] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa88 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.611248] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfab0 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.611254] nvme_rdma.c:1120:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:18:13.290 [2024-10-17 17:42:51.611260] nvme_rdma.c:1123:nvme_rdma_connect_established: *DEBUG*: rc =0 00:18:13.290 [2024-10-17 17:42:51.611265] nvme_rdma.c:1128:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:18:13.290 [2024-10-17 17:42:51.611283] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.611298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf180 len:0x400 key:0x181e00 00:18:13.290 [2024-10-17 17:42:51.615423] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.290 [2024-10-17 17:42:51.615436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:18:13.290 [2024-10-17 17:42:51.615444] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.615452] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:18:13.290 [2024-10-17 17:42:51.615460] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:18:13.290 [2024-10-17 17:42:51.615467] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:18:13.290 [2024-10-17 17:42:51.615481] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.615489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.290 [2024-10-17 17:42:51.615514] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.290 [2024-10-17 17:42:51.615520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:18:13.290 [2024-10-17 17:42:51.615527] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:18:13.290 [2024-10-17 17:42:51.615533] 
nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.615541] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:18:13.290 [2024-10-17 17:42:51.615549] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.615557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.290 [2024-10-17 17:42:51.615573] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.290 [2024-10-17 17:42:51.615579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:18:13.290 [2024-10-17 17:42:51.615585] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:18:13.290 [2024-10-17 17:42:51.615592] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.615599] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:18:13.290 [2024-10-17 17:42:51.615607] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.615614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.290 [2024-10-17 17:42:51.615634] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.290 [2024-10-17 17:42:51.615640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:13.290 [2024-10-17 17:42:51.615646] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:13.290 [2024-10-17 17:42:51.615653] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.615661] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.615669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.290 [2024-10-17 17:42:51.615685] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.290 [2024-10-17 17:42:51.615690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:13.290 [2024-10-17 17:42:51.615699] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:18:13.290 [2024-10-17 17:42:51.615705] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:18:13.290 [2024-10-17 17:42:51.615711] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.615718] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:13.290 [2024-10-17 17:42:51.615824] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:18:13.290 [2024-10-17 17:42:51.615830] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:13.290 [2024-10-17 17:42:51.615840] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.615848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.290 [2024-10-17 17:42:51.615870] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.290 [2024-10-17 17:42:51.615876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:13.290 [2024-10-17 17:42:51.615882] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:13.290 [2024-10-17 17:42:51.615888] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.615896] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.615904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.290 [2024-10-17 17:42:51.615924] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.290 [2024-10-17 17:42:51.615930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:18:13.290 [2024-10-17 17:42:51.615936] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:13.290 [2024-10-17 17:42:51.615942] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:18:13.290 [2024-10-17 17:42:51.615948] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.615955] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:18:13.290 [2024-10-17 17:42:51.615964] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:18:13.290 [2024-10-17 17:42:51.615974] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x181e00 00:18:13.290 [2024-10-17 17:42:51.615982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x181e00 00:18:13.290 [2024-10-17 17:42:51.616011] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.290 [2024-10-17 17:42:51.616017] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:13.290 [2024-10-17 17:42:51.616026] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:18:13.290 [2024-10-17 17:42:51.616034] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:18:13.290 [2024-10-17 17:42:51.616039] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:18:13.290 [2024-10-17 17:42:51.616046] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:18:13.290 [2024-10-17 17:42:51.616052] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:18:13.291 [2024-10-17 17:42:51.616057] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:18:13.291 [2024-10-17 17:42:51.616063] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x181e00 00:18:13.291 [2024-10-17 17:42:51.616071] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:18:13.291 [2024-10-17 17:42:51.616081] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x181e00 00:18:13.291 [2024-10-17 17:42:51.616089] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.291 [2024-10-17 17:42:51.616109] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.291 [2024-10-17 17:42:51.616115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:13.291 [2024-10-17 17:42:51.616124] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x181e00 00:18:13.291 [2024-10-17 17:42:51.616131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.291 [2024-10-17 17:42:51.616139] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0x181e00 00:18:13.291 [2024-10-17 17:42:51.616146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.291 [2024-10-17 17:42:51.616153] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x181e00 00:18:13.291 [2024-10-17 17:42:51.616160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.291 [2024-10-17 17:42:51.616167] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0x181e00 00:18:13.291 [2024-10-17 17:42:51.616174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.291 [2024-10-17 17:42:51.616180] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout 
(timeout 30000 ms) 00:18:13.291 [2024-10-17 17:42:51.616186] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x181e00 00:18:13.291 [2024-10-17 17:42:51.616197] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:13.291 [2024-10-17 17:42:51.616204] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x181e00 00:18:13.291 [2024-10-17 17:42:51.616212] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.291 [2024-10-17 17:42:51.616228] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.291 [2024-10-17 17:42:51.616234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:18:13.291 [2024-10-17 17:42:51.616241] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:18:13.291 [2024-10-17 17:42:51.616249] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:18:13.291 [2024-10-17 17:42:51.616255] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x181e00 00:18:13.291 [2024-10-17 17:42:51.616264] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x181e00 00:18:13.291 [2024-10-17 17:42:51.616272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x181e00 00:18:13.291 [2024-10-17 17:42:51.616297] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.291 [2024-10-17 17:42:51.616302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:13.291 [2024-10-17 17:42:51.616310] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x181e00 00:18:13.291 [2024-10-17 17:42:51.616319] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:18:13.291 [2024-10-17 17:42:51.616345] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x181e00 00:18:13.291 [2024-10-17 17:42:51.616353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x181e00 00:18:13.291 [2024-10-17 17:42:51.616362] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x181e00 00:18:13.291 [2024-10-17 17:42:51.616369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.291 [2024-10-17 17:42:51.616389] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.291 [2024-10-17 17:42:51.616395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:13.291 [2024-10-17 17:42:51.616407] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d0ac0 length 0x40 lkey 0x181e00
00:18:13.291 [2024-10-17 17:42:51.616415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x181e00
00:18:13.291 [2024-10-17 17:42:51.616427] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x181e00
00:18:13.291 [2024-10-17 17:42:51.616434] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:18:13.291 [2024-10-17 17:42:51.616439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:18:13.291 [2024-10-17 17:42:51.616445] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e0 length 0x10 lkey 0x181e00
00:18:13.291 [2024-10-17 17:42:51.616452] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:18:13.291 [2024-10-17 17:42:51.616457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:18:13.291 [2024-10-17 17:42:51.616467] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x181e00
00:18:13.291 [2024-10-17 17:42:51.616475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x181e00
00:18:13.291 [2024-10-17 17:42:51.616481] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf808 length 0x10 lkey 0x181e00
00:18:13.291 [2024-10-17 17:42:51.616500] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:18:13.291 [2024-10-17 17:42:51.616506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:18:13.291 [2024-10-17 17:42:51.616518] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf830 length 0x10 lkey 0x181e00
00:18:13.291 =====================================================
00:18:13.291 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery
00:18:13.291 =====================================================
00:18:13.291 Controller Capabilities/Features
00:18:13.291 ================================
00:18:13.291 Vendor ID: 0000
00:18:13.291 Subsystem Vendor ID: 0000
00:18:13.291 Serial Number: ....................
00:18:13.291 Model Number: ........................................
00:18:13.291 Firmware Version: 25.01
00:18:13.291 Recommended Arb Burst: 0
00:18:13.291 IEEE OUI Identifier: 00 00 00
00:18:13.291 Multi-path I/O
00:18:13.291 May have multiple subsystem ports: No
00:18:13.291 May have multiple controllers: No
00:18:13.291 Associated with SR-IOV VF: No
00:18:13.291 Max Data Transfer Size: 131072
00:18:13.291 Max Number of Namespaces: 0
00:18:13.291 Max Number of I/O Queues: 1024
00:18:13.291 NVMe Specification Version (VS): 1.3
00:18:13.291 NVMe Specification Version (Identify): 1.3
00:18:13.291 Maximum Queue Entries: 128
00:18:13.291 Contiguous Queues Required: Yes
00:18:13.291 Arbitration Mechanisms Supported
00:18:13.291 Weighted Round Robin: Not Supported
00:18:13.291 Vendor Specific: Not Supported
00:18:13.291 Reset Timeout: 15000 ms
00:18:13.291 Doorbell Stride: 4 bytes
00:18:13.291 NVM Subsystem Reset: Not Supported
00:18:13.291 Command Sets Supported
00:18:13.291 NVM Command Set: Supported
00:18:13.291 Boot Partition: Not Supported
00:18:13.291 Memory Page Size Minimum: 4096 bytes
00:18:13.291 Memory Page Size Maximum: 4096 bytes
00:18:13.291 Persistent Memory Region: Not Supported
00:18:13.291 Optional Asynchronous Events Supported
00:18:13.291 Namespace Attribute Notices: Not Supported
00:18:13.291 Firmware Activation Notices: Not Supported
00:18:13.291 ANA Change Notices: Not Supported
00:18:13.291 PLE Aggregate Log Change Notices: Not Supported
00:18:13.291 LBA Status Info Alert Notices: Not Supported
00:18:13.291 EGE Aggregate Log Change Notices: Not Supported
00:18:13.291 Normal NVM Subsystem Shutdown event: Not Supported
00:18:13.291 Zone Descriptor Change Notices: Not Supported
00:18:13.291 Discovery Log Change Notices: Supported
00:18:13.291 Controller Attributes
00:18:13.291 128-bit Host Identifier: Not Supported
00:18:13.291 Non-Operational Permissive Mode: Not Supported
00:18:13.291 NVM Sets: Not Supported
00:18:13.291 Read Recovery Levels: Not Supported
00:18:13.292 Endurance Groups: Not Supported
00:18:13.292 Predictable Latency Mode: Not Supported
00:18:13.292 Traffic Based Keep ALive: Not Supported
00:18:13.292 Namespace Granularity: Not Supported
00:18:13.292 SQ Associations: Not Supported
00:18:13.292 UUID List: Not Supported
00:18:13.292 Multi-Domain Subsystem: Not Supported
00:18:13.292 Fixed Capacity Management: Not Supported
00:18:13.292 Variable Capacity Management: Not Supported
00:18:13.292 Delete Endurance Group: Not Supported
00:18:13.292 Delete NVM Set: Not Supported
00:18:13.292 Extended LBA Formats Supported: Not Supported
00:18:13.292 Flexible Data Placement Supported: Not Supported
00:18:13.292
00:18:13.292 Controller Memory Buffer Support
00:18:13.292 ================================
00:18:13.292 Supported: No
00:18:13.292
00:18:13.292 Persistent Memory Region Support
00:18:13.292 ================================
00:18:13.292 Supported: No
00:18:13.292
00:18:13.292 Admin Command Set Attributes
00:18:13.292 ============================
00:18:13.292 Security Send/Receive: Not Supported
00:18:13.292 Format NVM: Not Supported
00:18:13.292 Firmware Activate/Download: Not Supported
00:18:13.292 Namespace Management: Not Supported
00:18:13.292 Device Self-Test: Not Supported
00:18:13.292 Directives: Not Supported
00:18:13.292 NVMe-MI: Not Supported
00:18:13.292 Virtualization Management: Not Supported
00:18:13.292 Doorbell Buffer Config: Not Supported
00:18:13.292 Get LBA Status Capability: Not Supported
00:18:13.292 Command & Feature Lockdown Capability: Not Supported
00:18:13.292 Abort Command Limit: 1
00:18:13.292 Async Event Request Limit: 4
00:18:13.292 Number of Firmware Slots: N/A
00:18:13.292 Firmware Slot 1 Read-Only: N/A
00:18:13.292 Firmware Activation Without Reset: N/A
00:18:13.292 Multiple Update Detection Support: N/A
00:18:13.292 Firmware Update Granularity: No Information Provided
00:18:13.292 Per-Namespace SMART Log: No
00:18:13.292 Asymmetric Namespace Access Log Page: Not Supported
00:18:13.292 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:18:13.292 Command Effects Log Page: Not Supported
00:18:13.292 Get Log Page Extended Data: Supported
00:18:13.292 Telemetry Log Pages: Not Supported
00:18:13.292 Persistent Event Log Pages: Not Supported
00:18:13.292 Supported Log Pages Log Page: May Support
00:18:13.292 Commands Supported & Effects Log Page: Not Supported
00:18:13.292 Feature Identifiers & Effects Log Page:May Support
00:18:13.292 NVMe-MI Commands & Effects Log Page: May Support
00:18:13.292 Data Area 4 for Telemetry Log: Not Supported
00:18:13.292 Error Log Page Entries Supported: 128
00:18:13.292 Keep Alive: Not Supported
00:18:13.292
00:18:13.292 NVM Command Set Attributes
00:18:13.292 ==========================
00:18:13.292 Submission Queue Entry Size
00:18:13.292 Max: 1
00:18:13.292 Min: 1
00:18:13.292 Completion Queue Entry Size
00:18:13.292 Max: 1
00:18:13.292 Min: 1
00:18:13.292 Number of Namespaces: 0
00:18:13.292 Compare Command: Not Supported
00:18:13.292 Write Uncorrectable Command: Not Supported
00:18:13.292 Dataset Management Command: Not Supported
00:18:13.292 Write Zeroes Command: Not Supported
00:18:13.292 Set Features Save Field: Not Supported
00:18:13.292 Reservations: Not Supported
00:18:13.292 Timestamp: Not Supported
00:18:13.292 Copy: Not Supported
00:18:13.292 Volatile Write Cache: Not Present
00:18:13.292 Atomic Write Unit (Normal): 1
00:18:13.292 Atomic Write Unit (PFail): 1
00:18:13.292 Atomic Compare & Write Unit: 1
00:18:13.292 Fused Compare & Write: Supported
00:18:13.292 Scatter-Gather List
00:18:13.292 SGL Command Set: Supported
00:18:13.292 SGL Keyed: Supported
00:18:13.292 SGL Bit Bucket Descriptor: Not Supported
00:18:13.292 SGL Metadata Pointer: Not Supported
00:18:13.292 Oversized SGL: Not Supported
00:18:13.292 SGL Metadata Address: Not Supported
00:18:13.292 SGL Offset: Supported
00:18:13.292 Transport SGL Data Block: Not Supported
00:18:13.292 Replay Protected Memory Block: Not Supported
00:18:13.292
00:18:13.292 Firmware Slot Information
00:18:13.292 =========================
00:18:13.292 Active slot: 0
00:18:13.292
00:18:13.292
00:18:13.292 Error Log
00:18:13.292 =========
00:18:13.292
00:18:13.292 Active Namespaces
00:18:13.292 =================
00:18:13.292 Discovery Log Page
00:18:13.292 ==================
00:18:13.292 Generation Counter: 2
00:18:13.292 Number of Records: 2
00:18:13.292 Record Format: 0
00:18:13.292
00:18:13.292 Discovery Log Entry 0
00:18:13.292 ----------------------
00:18:13.292 Transport Type: 1 (RDMA)
00:18:13.292 Address Family: 1 (IPv4)
00:18:13.292 Subsystem Type: 3 (Current Discovery Subsystem)
00:18:13.292 Entry Flags:
00:18:13.292 Duplicate Returned Information: 1
00:18:13.292 Explicit Persistent Connection Support for Discovery: 1
00:18:13.292 Transport Requirements:
00:18:13.292 Secure Channel: Not Required
00:18:13.292 Port ID: 0 (0x0000)
00:18:13.292 Controller ID: 65535 (0xffff)
00:18:13.292 Admin Max SQ Size: 128
00:18:13.292 Transport Service Identifier: 4420
00:18:13.292 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:18:13.292 Transport Address: 192.168.100.8
00:18:13.292 Transport Specific Address Subtype - RDMA
00:18:13.292 RDMA QP Service Type: 1 (Reliable Connected)
00:18:13.292 RDMA Provider Type: 1 (No provider specified)
00:18:13.292 RDMA CM Service: 1 (RDMA_CM)
00:18:13.292 Discovery Log Entry 1
00:18:13.292 ----------------------
00:18:13.292 Transport Type: 1 (RDMA)
00:18:13.292 Address Family: 1 (IPv4)
00:18:13.292 Subsystem Type: 2 (NVM Subsystem)
00:18:13.292 Entry Flags:
00:18:13.292 Duplicate Returned Information: 0
00:18:13.292 Explicit Persistent Connection Support for Discovery: 0
00:18:13.292 Transport Requirements:
00:18:13.292 Secure Channel: Not Required
00:18:13.292 Port ID: 0 (0x0000)
00:18:13.292 Controller ID: 65535 (0xffff)
00:18:13.292 Admin Max SQ Size: [2024-10-17 17:42:51.616593] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
[2024-10-17 17:42:51.616604] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 59247 doesn't match qid
[2024-10-17 17:42:51.616619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32715 cdw0:5 sqhd:c990 p:0 m:0 dnr:0
[2024-10-17 17:42:51.616625] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 59247 doesn't match qid
[2024-10-17 17:42:51.616634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32715 cdw0:5 sqhd:c990 p:0 m:0 dnr:0
[2024-10-17 17:42:51.616640] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 59247 doesn't match qid
[2024-10-17 17:42:51.616648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32715 cdw0:5 sqhd:c990 p:0 m:0 dnr:0
[2024-10-17 17:42:51.616655] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 59247 doesn't match qid
[2024-10-17 17:42:51.616662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32715 cdw0:5 sqhd:c990 p:0 m:0 dnr:0
[2024-10-17 17:42:51.616671] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0x181e00
[2024-10-17 17:42:51.616679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
[2024-10-17 17:42:51.616701] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
[2024-10-17 17:42:51.616707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0
[2024-10-17 17:42:51.616715] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x181e00
[2024-10-17 17:42:51.616723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
[2024-10-17 17:42:51.616729] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf858 length 0x10 lkey 0x181e00
[2024-10-17 17:42:51.616744] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
[2024-10-17 17:42:51.616750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
[2024-10-17 17:42:51.616757] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*:
[nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:18:13.293 [2024-10-17 17:42:51.616763] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:18:13.293 [2024-10-17 17:42:51.616769] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf880 length 0x10 lkey 0x181e00 00:18:13.293 [2024-10-17 17:42:51.616778] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x181e00 00:18:13.293 [2024-10-17 17:42:51.616786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.293 [2024-10-17 17:42:51.616804] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.293 [2024-10-17 17:42:51.616810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:18:13.293 [2024-10-17 17:42:51.616817] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a8 length 0x10 lkey 0x181e00 00:18:13.293 [2024-10-17 17:42:51.616826] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x181e00 00:18:13.293 [2024-10-17 17:42:51.616835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.293 [2024-10-17 17:42:51.616853] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.293 [2024-10-17 17:42:51.616860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:18:13.293 [2024-10-17 17:42:51.616867] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d0 length 0x10 lkey 0x181e00 00:18:13.293 [2024-10-17 17:42:51.616876] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x181e00 00:18:13.293 [2024-10-17 17:42:51.616884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.293 [2024-10-17 17:42:51.616900] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.293 [2024-10-17 17:42:51.616906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:18:13.293 [2024-10-17 17:42:51.616912] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f8 length 0x10 lkey 0x181e00 00:18:13.293 [2024-10-17 17:42:51.616921] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x181e00 00:18:13.293 [2024-10-17 17:42:51.616930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.293 [2024-10-17 17:42:51.616948] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.293 [2024-10-17 17:42:51.616954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:18:13.293 [2024-10-17 17:42:51.616960] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf920 length 0x10 lkey 0x181e00 00:18:13.293 [2024-10-17 17:42:51.616969] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x181e00 
00:18:13.293 [2024-10-17 17:42:51.616978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.293 [2024-10-17 17:42:51.616998] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.293 [2024-10-17 17:42:51.617004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:18:13.293 [2024-10-17 17:42:51.617011] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf948 length 0x10 lkey 0x181e00 00:18:13.293 [2024-10-17 17:42:51.617020] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x181e00 00:18:13.293 [2024-10-17 17:42:51.617028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.293 [2024-10-17 17:42:51.617046] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.293 [2024-10-17 17:42:51.617052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:18:13.293 [2024-10-17 17:42:51.617059] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf970 length 0x10 lkey 0x181e00 00:18:13.293 [2024-10-17 17:42:51.617068] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x181e00 00:18:13.293 [2024-10-17 17:42:51.617076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.293 [2024-10-17 17:42:51.617098] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.293 [2024-10-17 17:42:51.617104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:18:13.293 [2024-10-17 17:42:51.617110] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf998 length 0x10 lkey 0x181e00 00:18:13.293 [2024-10-17 17:42:51.617119] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x181e00 00:18:13.293 [2024-10-17 17:42:51.617127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.293 [2024-10-17 17:42:51.617143] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.293 [2024-10-17 17:42:51.617149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:18:13.293 [2024-10-17 17:42:51.617155] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c0 length 0x10 lkey 0x181e00 00:18:13.293 [2024-10-17 17:42:51.617164] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x181e00 00:18:13.293 [2024-10-17 17:42:51.617172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.293 [2024-10-17 17:42:51.617187] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.293 [2024-10-17 17:42:51.617192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:18:13.293 [2024-10-17 17:42:51.617199] 
nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e8 length 0x10 lkey 0x181e00 00:18:13.293 [2024-10-17 17:42:51.617208] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x181e00 00:18:13.293 [2024-10-17 17:42:51.617216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.293 [2024-10-17 17:42:51.617236] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.293 [2024-10-17 17:42:51.617242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:18:13.293 [2024-10-17 17:42:51.617248] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa10 length 0x10 lkey 0x181e00 00:18:13.293 [2024-10-17 17:42:51.617257] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x181e00 00:18:13.293 [2024-10-17 17:42:51.617265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.293 [2024-10-17 17:42:51.617283] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.293 [2024-10-17 17:42:51.617289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:18:13.293 [2024-10-17 17:42:51.617295] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa38 length 0x10 lkey 0x181e00 00:18:13.293 [2024-10-17 17:42:51.617304] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x181e00 00:18:13.293 [2024-10-17 17:42:51.617312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.293 [2024-10-17 17:42:51.617326] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.293 [2024-10-17 17:42:51.617332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:18:13.293 [2024-10-17 17:42:51.617338] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa60 length 0x10 lkey 0x181e00 00:18:13.293 [2024-10-17 17:42:51.617347] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x181e00 00:18:13.293 [2024-10-17 17:42:51.617355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.293 [2024-10-17 17:42:51.617375] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.293 [2024-10-17 17:42:51.617381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:18:13.293 [2024-10-17 17:42:51.617388] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa88 length 0x10 lkey 0x181e00 00:18:13.293 [2024-10-17 17:42:51.617397] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x181e00 00:18:13.293 [2024-10-17 17:42:51.617404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.293 [2024-10-17 17:42:51.617431] 
nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.293 [2024-10-17 17:42:51.617437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:18:13.293 [2024-10-17 17:42:51.617444] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfab0 length 0x10 lkey 0x181e00 00:18:13.293 [2024-10-17 17:42:51.617453] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x181e00 00:18:13.293 [2024-10-17 17:42:51.617461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.293 [2024-10-17 17:42:51.617481] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.293 [2024-10-17 17:42:51.617487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:18:13.293 [2024-10-17 17:42:51.617493] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x181e00 00:18:13.293 [2024-10-17 17:42:51.617502] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x181e00 00:18:13.293 [2024-10-17 17:42:51.617510] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.293 [2024-10-17 17:42:51.617528] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.293 [2024-10-17 17:42:51.617534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:18:13.293 [2024-10-17 17:42:51.617540] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x181e00 00:18:13.294 [2024-10-17 17:42:51.617549] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x181e00 00:18:13.294 [2024-10-17 17:42:51.617557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.294 [2024-10-17 17:42:51.617571] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.294 [2024-10-17 17:42:51.617577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:18:13.294 [2024-10-17 17:42:51.617583] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x181e00 00:18:13.294 [2024-10-17 17:42:51.617592] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x181e00 00:18:13.294 [2024-10-17 17:42:51.617600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.294 [2024-10-17 17:42:51.617618] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.294 [2024-10-17 17:42:51.617623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:18:13.294 [2024-10-17 17:42:51.617630] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x181e00 00:18:13.294 [2024-10-17 17:42:51.617639] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 
0x40 lkey 0x181e00
00:18:13.294 [2024-10-17 17:42:51.617647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:18:13.294 [2024-10-17 17:42:51.617668] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:18:13.294 [2024-10-17 17:42:51.617674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0
[... the same nvme_rdma_request_ready / nvme_rdma_qpair_submit_request / FABRIC PROPERTY GET / CQ recv completion / SUCCESS polling cycle repeats, sqhd advancing 0005 through 001f and wrapping to 0008, CSTS reading cdw0:1 on each pass, while the host waits for discovery-controller shutdown ...]
00:18:13.296 [2024-10-17 17:42:51.623441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:18:13.296 [2024-10-17 17:42:51.623460] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:18:13.296 [2024-10-17 17:42:51.623466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0009 p:0 m:0 dnr:0
00:18:13.296 [2024-10-17 17:42:51.623472] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x181e00
00:18:13.296 [2024-10-17 17:42:51.623479] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds
00:18:13.296 Admin Max SQ Size: 128
00:18:13.296 Transport Service Identifier: 4420
00:18:13.296 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:18:13.296 Transport Address: 192.168.100.8
00:18:13.296 Transport Specific Address Subtype - RDMA
00:18:13.296 RDMA QP Service Type: 1 (Reliable Connected)
00:18:13.296 RDMA Provider Type: 1 (No provider specified)
00:18:13.296 RDMA CM Service: 1 (RDMA_CM)
00:18:13.296 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:18:13.558 [2024-10-17 17:42:51.697258] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization...
00:18:13.558 [2024-10-17 17:42:51.697309] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid670372 ]
00:18:13.558 [2024-10-17 17:42:51.741869] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:18:13.558 [2024-10-17 17:42:51.741942] nvme_rdma.c:2214:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr
00:18:13.558 [2024-10-17 17:42:51.741961] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2
00:18:13.558 [2024-10-17 17:42:51.741966] nvme_rdma.c:1219:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420
00:18:13.558 [2024-10-17 17:42:51.741991] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:18:13.558 [2024-10-17 17:42:51.750757] nvme_rdma.c: 431:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32.
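The quoted string after -r is the standard SPDK transport-ID string: it is parsed into a struct spdk_nvme_transport_id before the host library attaches to the controller and drives the admin-queue bring-up traced in the DEBUG records below. A minimal stand-alone sketch of one way to follow the same connect path against this target is shown here; the app name, error handling, and printf are illustrative assumptions, not code from this test.

/* Sketch (assumptions noted above): connect to the NVMe-oF RDMA target this
 * test exercises, using SPDK's public host API. Build against SPDK headers/libs. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {0};
    struct spdk_nvme_ctrlr *ctrlr;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch"; /* hypothetical app name */
    if (spdk_env_init(&env_opts) < 0) {
        return 1;
    }

    /* Same string format passed to spdk_nvme_identify's -r flag above. */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* Runs the admin-queue state machine seen in the DEBUG records:
     * FABRIC CONNECT, read VS/CAP, CC.EN = 1, wait CSTS.RDY = 1, IDENTIFY, ... */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        fprintf(stderr, "connect failed\n");
        return 1;
    }
    printf("connected: %s\n", spdk_nvme_ctrlr_get_data(ctrlr)->subnqn);
    spdk_nvme_detach(ctrlr);
    return 0;
}

The shell command captured in the log is the prebuilt equivalent of this sketch, with -L all additionally raising the log level so every state transition is printed.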
00:18:13.558 [2024-10-17 17:42:51.765726] nvme_rdma.c:1101:nvme_rdma_connect_established: *DEBUG*: rc =0 00:18:13.558 [2024-10-17 17:42:51.765738] nvme_rdma.c:1106:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:18:13.558 [2024-10-17 17:42:51.765746] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x181e00 00:18:13.558 [2024-10-17 17:42:51.765754] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x181e00 00:18:13.558 [2024-10-17 17:42:51.765760] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x181e00 00:18:13.558 [2024-10-17 17:42:51.765766] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x181e00 00:18:13.558 [2024-10-17 17:42:51.765772] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x181e00 00:18:13.558 [2024-10-17 17:42:51.765784] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x181e00 00:18:13.558 [2024-10-17 17:42:51.765790] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x181e00 00:18:13.558 [2024-10-17 17:42:51.765797] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x181e00 00:18:13.558 [2024-10-17 17:42:51.765803] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x181e00 00:18:13.558 [2024-10-17 17:42:51.765809] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x181e00 00:18:13.558 [2024-10-17 17:42:51.765815] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x181e00 00:18:13.558 [2024-10-17 17:42:51.765821] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x181e00 00:18:13.558 [2024-10-17 17:42:51.765827] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7e0 length 0x10 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.765833] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf808 length 0x10 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.765840] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf830 length 0x10 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.765846] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf858 length 0x10 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.765852] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf880 length 0x10 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.765858] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a8 length 0x10 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.765864] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8d0 length 0x10 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.765870] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f8 length 0x10 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.765876] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf920 length 0x10 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.765882] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf948 length 0x10 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.765889] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf970 length 0x10 lkey 0x181e00 00:18:13.559 [2024-10-17 
17:42:51.765895] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf998 length 0x10 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.765901] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9c0 length 0x10 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.765907] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e8 length 0x10 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.765913] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa10 length 0x10 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.765919] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa38 length 0x10 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.765925] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa60 length 0x10 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.765932] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa88 length 0x10 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.765938] nvme_rdma.c: 889:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfab0 length 0x10 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.765943] nvme_rdma.c:1120:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:18:13.559 [2024-10-17 17:42:51.765949] nvme_rdma.c:1123:nvme_rdma_connect_established: *DEBUG*: rc =0 00:18:13.559 [2024-10-17 17:42:51.765954] nvme_rdma.c:1128:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:18:13.559 [2024-10-17 17:42:51.765970] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.765984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf180 len:0x400 key:0x181e00 00:18:13.559 [2024-10-17 17:42:51.771422] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.559 [2024-10-17 17:42:51.771434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:18:13.559 [2024-10-17 17:42:51.771442] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf600 length 0x10 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.771450] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:18:13.559 [2024-10-17 17:42:51.771457] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:18:13.559 [2024-10-17 17:42:51.771463] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:18:13.559 [2024-10-17 17:42:51.771475] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.771484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.559 [2024-10-17 17:42:51.771505] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.559 [2024-10-17 17:42:51.771511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:18:13.559 [2024-10-17 17:42:51.771518] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:18:13.559 [2024-10-17 17:42:51.771524] nvme_rdma.c:2389:nvme_rdma_request_ready: 
*DEBUG*: local addr 0x2000003cf628 length 0x10 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.771531] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:18:13.559 [2024-10-17 17:42:51.771539] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.771546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.559 [2024-10-17 17:42:51.771561] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.559 [2024-10-17 17:42:51.771566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:18:13.559 [2024-10-17 17:42:51.771573] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:18:13.559 [2024-10-17 17:42:51.771579] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf650 length 0x10 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.771586] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:18:13.559 [2024-10-17 17:42:51.771594] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.771602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.559 [2024-10-17 17:42:51.771622] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.559 [2024-10-17 17:42:51.771628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:13.559 [2024-10-17 17:42:51.771634] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:13.559 [2024-10-17 17:42:51.771640] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf678 length 0x10 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.771649] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.771657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.559 [2024-10-17 17:42:51.771675] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.559 [2024-10-17 17:42:51.771680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:13.559 [2024-10-17 17:42:51.771688] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:18:13.559 [2024-10-17 17:42:51.771694] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:18:13.559 [2024-10-17 17:42:51.771700] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6a0 length 0x10 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.771707] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by 
writing CC.EN = 1 (timeout 15000 ms) 00:18:13.559 [2024-10-17 17:42:51.771814] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:18:13.559 [2024-10-17 17:42:51.771819] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:13.559 [2024-10-17 17:42:51.771828] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.771835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.559 [2024-10-17 17:42:51.771858] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.559 [2024-10-17 17:42:51.771863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:13.559 [2024-10-17 17:42:51.771870] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:13.559 [2024-10-17 17:42:51.771876] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c8 length 0x10 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.771884] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.771892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.559 [2024-10-17 17:42:51.771910] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.559 [2024-10-17 17:42:51.771915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:18:13.559 [2024-10-17 17:42:51.771922] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:13.559 [2024-10-17 17:42:51.771927] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:18:13.559 [2024-10-17 17:42:51.771933] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.771940] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:18:13.559 [2024-10-17 17:42:51.771953] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:18:13.559 [2024-10-17 17:42:51.771962] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.771971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x181e00 00:18:13.559 [2024-10-17 17:42:51.772013] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.559 [2024-10-17 17:42:51.772019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:13.559 [2024-10-17 17:42:51.772028] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:18:13.559 [2024-10-17 17:42:51.772034] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:18:13.559 [2024-10-17 17:42:51.772041] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:18:13.559 [2024-10-17 17:42:51.772046] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:18:13.559 [2024-10-17 17:42:51.772052] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:18:13.559 [2024-10-17 17:42:51.772058] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:18:13.559 [2024-10-17 17:42:51.772064] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.772071] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:18:13.559 [2024-10-17 17:42:51.772081] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.772089] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.559 [2024-10-17 17:42:51.772110] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.559 [2024-10-17 17:42:51.772115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:13.559 [2024-10-17 17:42:51.772124] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0480 length 0x40 lkey 0x181e00 00:18:13.559 [2024-10-17 17:42:51.772131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.560 [2024-10-17 17:42:51.772138] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d05c0 length 0x40 lkey 0x181e00 00:18:13.560 [2024-10-17 17:42:51.772145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.560 [2024-10-17 17:42:51.772153] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x181e00 00:18:13.560 [2024-10-17 17:42:51.772160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.560 [2024-10-17 17:42:51.772167] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0x181e00 00:18:13.560 [2024-10-17 17:42:51.772174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.560 [2024-10-17 17:42:51.772180] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:18:13.560 [2024-10-17 17:42:51.772186] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf740 length 0x10 lkey 0x181e00 00:18:13.560 [2024-10-17 17:42:51.772196] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:13.560 [2024-10-17 17:42:51.772204] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x181e00 00:18:13.560 [2024-10-17 17:42:51.772212] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.560 [2024-10-17 17:42:51.772228] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.560 [2024-10-17 17:42:51.772234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:18:13.560 [2024-10-17 17:42:51.772240] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:18:13.560 [2024-10-17 17:42:51.772247] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:13.560 [2024-10-17 17:42:51.772254] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf768 length 0x10 lkey 0x181e00 00:18:13.560 [2024-10-17 17:42:51.772264] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:18:13.560 [2024-10-17 17:42:51.772271] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:18:13.560 [2024-10-17 17:42:51.772278] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x181e00 00:18:13.560 [2024-10-17 17:42:51.772286] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.560 [2024-10-17 17:42:51.772304] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.560 [2024-10-17 17:42:51.772310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:18:13.560 [2024-10-17 17:42:51.772363] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:18:13.560 [2024-10-17 17:42:51.772369] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf790 length 0x10 lkey 0x181e00 00:18:13.560 [2024-10-17 17:42:51.772377] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:18:13.560 [2024-10-17 17:42:51.772386] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x181e00 00:18:13.560 [2024-10-17 17:42:51.772393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x181e00 00:18:13.560 [2024-10-17 17:42:51.772420] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.560 [2024-10-17 17:42:51.772426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:13.560 [2024-10-17 17:42:51.772441] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:18:13.560 
[2024-10-17 17:42:51.772451] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:18:13.560 [2024-10-17 17:42:51.772457] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b8 length 0x10 lkey 0x181e00 00:18:13.560 [2024-10-17 17:42:51.772465] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:18:13.560 [2024-10-17 17:42:51.772473] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x181e00 00:18:13.560 [2024-10-17 17:42:51.772481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x181e00 00:18:13.560 [2024-10-17 17:42:51.772515] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.560 [2024-10-17 17:42:51.772520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:13.560 [2024-10-17 17:42:51.772536] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:13.560 [2024-10-17 17:42:51.772542] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7e0 length 0x10 lkey 0x181e00 00:18:13.560 [2024-10-17 17:42:51.772550] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:13.560 [2024-10-17 17:42:51.772559] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x181e00 00:18:13.560 [2024-10-17 17:42:51.772566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x181e00 00:18:13.560 [2024-10-17 17:42:51.772588] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.560 [2024-10-17 17:42:51.772594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:13.560 [2024-10-17 17:42:51.772603] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:13.560 [2024-10-17 17:42:51.772609] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf808 length 0x10 lkey 0x181e00 00:18:13.560 [2024-10-17 17:42:51.772616] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:18:13.560 [2024-10-17 17:42:51.772625] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:18:13.560 [2024-10-17 17:42:51.772632] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:18:13.560 [2024-10-17 17:42:51.772638] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:13.560 [2024-10-17 17:42:51.772644] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host 
ID (timeout 30000 ms) 00:18:13.560 [2024-10-17 17:42:51.772651] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:18:13.560 [2024-10-17 17:42:51.772656] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:18:13.560 [2024-10-17 17:42:51.772663] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:18:13.560 [2024-10-17 17:42:51.772678] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x181e00 00:18:13.560 [2024-10-17 17:42:51.772686] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.560 [2024-10-17 17:42:51.772693] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x181e00 00:18:13.560 [2024-10-17 17:42:51.772701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:18:13.560 [2024-10-17 17:42:51.772711] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.560 [2024-10-17 17:42:51.772717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:13.560 [2024-10-17 17:42:51.772724] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf830 length 0x10 lkey 0x181e00 00:18:13.560 [2024-10-17 17:42:51.772730] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.560 [2024-10-17 17:42:51.772735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:13.560 [2024-10-17 17:42:51.772741] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf858 length 0x10 lkey 0x181e00 00:18:13.560 [2024-10-17 17:42:51.772751] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x181e00 00:18:13.560 [2024-10-17 17:42:51.772758] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.560 [2024-10-17 17:42:51.772779] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.560 [2024-10-17 17:42:51.772785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:13.560 [2024-10-17 17:42:51.772791] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf880 length 0x10 lkey 0x181e00 00:18:13.560 [2024-10-17 17:42:51.772800] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x181e00 00:18:13.560 [2024-10-17 17:42:51.772809] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.560 [2024-10-17 17:42:51.772830] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.560 [2024-10-17 17:42:51.772836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:13.560 [2024-10-17 17:42:51.772842] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: 
local addr 0x2000003cf8a8 length 0x10 lkey 0x181e00 00:18:13.560 [2024-10-17 17:42:51.772851] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x181e00 00:18:13.560 [2024-10-17 17:42:51.772859] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.560 [2024-10-17 17:42:51.772882] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.560 [2024-10-17 17:42:51.772888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:18:13.560 [2024-10-17 17:42:51.772894] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8d0 length 0x10 lkey 0x181e00 00:18:13.560 [2024-10-17 17:42:51.772908] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0980 length 0x40 lkey 0x181e00 00:18:13.560 [2024-10-17 17:42:51.772916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x181e00 00:18:13.560 [2024-10-17 17:42:51.772925] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0340 length 0x40 lkey 0x181e00 00:18:13.560 [2024-10-17 17:42:51.772932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x181e00 00:18:13.560 [2024-10-17 17:42:51.772941] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0ac0 length 0x40 lkey 0x181e00 00:18:13.560 [2024-10-17 17:42:51.772948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x181e00 00:18:13.560 [2024-10-17 17:42:51.772957] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c00 length 0x40 lkey 0x181e00 00:18:13.560 [2024-10-17 17:42:51.772965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x181e00 00:18:13.560 [2024-10-17 17:42:51.772973] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.561 [2024-10-17 17:42:51.772979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:13.561 [2024-10-17 17:42:51.772990] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f8 length 0x10 lkey 0x181e00 00:18:13.561 [2024-10-17 17:42:51.772997] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.561 [2024-10-17 17:42:51.773002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:13.561 [2024-10-17 17:42:51.773013] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf920 length 0x10 lkey 0x181e00 00:18:13.561 [2024-10-17 17:42:51.773019] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.561 [2024-10-17 17:42:51.773025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
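The four GET LOG PAGE (02h) commands above are identify's -L all pass pulling the error (cdw10 page 01h), SMART/health (02h), firmware-slot (03h), and command-effects (05h) log pages. As a hedged illustration of the same async admin path reduced to a single page (the function and variable names here are assumptions, and the DMA-able allocation matters for RDMA payloads but is not taken from this log):

/* Sketch: fetch the SMART/health log page from an already-connected controller.
 * Assumes "ctrlr" came from spdk_nvme_connect() as in the earlier sketch. */
#include <stdbool.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool g_log_done; /* illustrative completion flag */

static void
log_page_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
    (void)cb_arg;
    g_log_done = true; /* real code would also check spdk_nvme_cpl_is_error(cpl) */
}

static int
read_health_log(struct spdk_nvme_ctrlr *ctrlr)
{
    /* DMA-able buffer: RDMA transports need registered memory for payloads. */
    struct spdk_nvme_health_information_page *hp =
        spdk_zmalloc(sizeof(*hp), 0x1000, NULL, SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
    int rc;

    if (hp == NULL) {
        return -1;
    }
    g_log_done = false;
    /* Queues an admin GET LOG PAGE (02h), like the ones traced above. */
    rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_HEALTH_INFORMATION,
                                          SPDK_NVME_GLOBAL_NS_TAG, hp, sizeof(*hp),
                                          0, log_page_done, NULL);
    while (rc == 0 && !g_log_done) {
        /* Poll the admin queue until the completion callback fires. */
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
    }
    spdk_free(hp);
    return rc;
}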
00:18:13.561 [2024-10-17 17:42:51.773032] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf948 length 0x10 lkey 0x181e00
00:18:13.561 [2024-10-17 17:42:51.773038] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:18:13.561 [2024-10-17 17:42:51.773045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:18:13.561 [2024-10-17 17:42:51.773054] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf970 length 0x10 lkey 0x181e00
00:18:13.561 =====================================================
00:18:13.561 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:18:13.561 =====================================================
00:18:13.561 Controller Capabilities/Features
00:18:13.561 ================================
00:18:13.561 Vendor ID: 8086
00:18:13.561 Subsystem Vendor ID: 8086
00:18:13.561 Serial Number: SPDK00000000000001
00:18:13.561 Model Number: SPDK bdev Controller
00:18:13.561 Firmware Version: 25.01
00:18:13.561 Recommended Arb Burst: 6
00:18:13.561 IEEE OUI Identifier: e4 d2 5c
00:18:13.561 Multi-path I/O
00:18:13.561 May have multiple subsystem ports: Yes
00:18:13.561 May have multiple controllers: Yes
00:18:13.561 Associated with SR-IOV VF: No
00:18:13.561 Max Data Transfer Size: 131072
00:18:13.561 Max Number of Namespaces: 32
00:18:13.561 Max Number of I/O Queues: 127
00:18:13.561 NVMe Specification Version (VS): 1.3
00:18:13.561 NVMe Specification Version (Identify): 1.3
00:18:13.561 Maximum Queue Entries: 128
00:18:13.561 Contiguous Queues Required: Yes
00:18:13.561 Arbitration Mechanisms Supported
00:18:13.561 Weighted Round Robin: Not Supported
00:18:13.561 Vendor Specific: Not Supported
00:18:13.561 Reset Timeout: 15000 ms
00:18:13.561 Doorbell Stride: 4 bytes
00:18:13.561 NVM Subsystem Reset: Not Supported
00:18:13.561 Command Sets Supported
00:18:13.561 NVM Command Set: Supported
00:18:13.561 Boot Partition: Not Supported
00:18:13.561 Memory Page Size Minimum: 4096 bytes
00:18:13.561 Memory Page Size Maximum: 4096 bytes
00:18:13.561 Persistent Memory Region: Not Supported
00:18:13.561 Optional Asynchronous Events Supported
00:18:13.561 Namespace Attribute Notices: Supported
00:18:13.561 Firmware Activation Notices: Not Supported
00:18:13.561 ANA Change Notices: Not Supported
00:18:13.561 PLE Aggregate Log Change Notices: Not Supported
00:18:13.561 LBA Status Info Alert Notices: Not Supported
00:18:13.561 EGE Aggregate Log Change Notices: Not Supported
00:18:13.561 Normal NVM Subsystem Shutdown event: Not Supported
00:18:13.561 Zone Descriptor Change Notices: Not Supported
00:18:13.561 Discovery Log Change Notices: Not Supported
00:18:13.561 Controller Attributes
00:18:13.561 128-bit Host Identifier: Supported
00:18:13.561 Non-Operational Permissive Mode: Not Supported
00:18:13.561 NVM Sets: Not Supported
00:18:13.561 Read Recovery Levels: Not Supported
00:18:13.561 Endurance Groups: Not Supported
00:18:13.561 Predictable Latency Mode: Not Supported
00:18:13.561 Traffic Based Keep ALive: Not Supported
00:18:13.561 Namespace Granularity: Not Supported
00:18:13.561 SQ Associations: Not Supported
00:18:13.561 UUID List: Not Supported
00:18:13.561 Multi-Domain Subsystem: Not Supported
00:18:13.561 Fixed Capacity Management: Not Supported
00:18:13.561 Variable Capacity Management: Not Supported
00:18:13.561 Delete Endurance Group: Not Supported
00:18:13.561 Delete NVM Set: Not Supported
00:18:13.561 Extended LBA Formats Supported: Not Supported
00:18:13.561 Flexible Data Placement Supported: Not Supported
00:18:13.561
00:18:13.561 Controller Memory Buffer Support
00:18:13.561 ================================
00:18:13.561 Supported: No
00:18:13.561
00:18:13.561 Persistent Memory Region Support
00:18:13.561 ================================
00:18:13.561 Supported: No
00:18:13.561
00:18:13.561 Admin Command Set Attributes
00:18:13.561 ============================
00:18:13.561 Security Send/Receive: Not Supported
00:18:13.561 Format NVM: Not Supported
00:18:13.561 Firmware Activate/Download: Not Supported
00:18:13.561 Namespace Management: Not Supported
00:18:13.561 Device Self-Test: Not Supported
00:18:13.561 Directives: Not Supported
00:18:13.561 NVMe-MI: Not Supported
00:18:13.561 Virtualization Management: Not Supported
00:18:13.561 Doorbell Buffer Config: Not Supported
00:18:13.561 Get LBA Status Capability: Not Supported
00:18:13.561 Command & Feature Lockdown Capability: Not Supported
00:18:13.561 Abort Command Limit: 4
00:18:13.561 Async Event Request Limit: 4
00:18:13.561 Number of Firmware Slots: N/A
00:18:13.561 Firmware Slot 1 Read-Only: N/A
00:18:13.561 Firmware Activation Without Reset: N/A
00:18:13.561 Multiple Update Detection Support: N/A
00:18:13.561 Firmware Update Granularity: No Information Provided
00:18:13.561 Per-Namespace SMART Log: No
00:18:13.561 Asymmetric Namespace Access Log Page: Not Supported
00:18:13.561 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:18:13.561 Command Effects Log Page: Supported
00:18:13.561 Get Log Page Extended Data: Supported
00:18:13.561 Telemetry Log Pages: Not Supported
00:18:13.561 Persistent Event Log Pages: Not Supported
00:18:13.561 Supported Log Pages Log Page: May Support
00:18:13.561 Commands Supported & Effects Log Page: Not Supported
00:18:13.561 Feature Identifiers & Effects Log Page:May Support
00:18:13.561 NVMe-MI Commands & Effects Log Page: May Support
00:18:13.561 Data Area 4 for Telemetry Log: Not Supported
00:18:13.561 Error Log Page Entries Supported: 128
00:18:13.561 Keep Alive: Supported
00:18:13.561 Keep Alive Granularity: 10000 ms
00:18:13.561
00:18:13.561 NVM Command Set Attributes
00:18:13.561 ==========================
00:18:13.561 Submission Queue Entry Size
00:18:13.561 Max: 64
00:18:13.561 Min: 64
00:18:13.561 Completion Queue Entry Size
00:18:13.561 Max: 16
00:18:13.561 Min: 16
00:18:13.561 Number of Namespaces: 32
00:18:13.561 Compare Command: Supported
00:18:13.561 Write Uncorrectable Command: Not Supported
00:18:13.561 Dataset Management Command: Supported
00:18:13.561 Write Zeroes Command: Supported
00:18:13.561 Set Features Save Field: Not Supported
00:18:13.561 Reservations: Supported
00:18:13.561 Timestamp: Not Supported
00:18:13.561 Copy: Supported
00:18:13.561 Volatile Write Cache: Present
00:18:13.561 Atomic Write Unit (Normal): 1
00:18:13.561 Atomic Write Unit (PFail): 1
00:18:13.561 Atomic Compare & Write Unit: 1
00:18:13.561 Fused Compare & Write: Supported
00:18:13.561 Scatter-Gather List
00:18:13.561 SGL Command Set: Supported
00:18:13.561 SGL Keyed: Supported
00:18:13.561 SGL Bit Bucket Descriptor: Not Supported
00:18:13.561 SGL Metadata Pointer: Not Supported
00:18:13.561 Oversized SGL: Not Supported
00:18:13.561 SGL Metadata Address: Not Supported
00:18:13.561 SGL Offset: Supported
00:18:13.561 Transport SGL Data Block: Not Supported
00:18:13.561 Replay Protected Memory Block: Not Supported
00:18:13.561
00:18:13.561 Firmware Slot Information
00:18:13.561 =========================
00:18:13.561 Active slot: 1
00:18:13.561 Slot 1 Firmware Revision: 25.01
00:18:13.561
00:18:13.561
00:18:13.561 Commands Supported and Effects
00:18:13.561 ==============================
00:18:13.561 Admin Commands
00:18:13.561 --------------
00:18:13.561 Get Log Page (02h): Supported
00:18:13.561 Identify (06h): Supported
00:18:13.561 Abort (08h): Supported
00:18:13.561 Set Features (09h): Supported
00:18:13.561 Get Features (0Ah): Supported
00:18:13.561 Asynchronous Event Request (0Ch): Supported
00:18:13.561 Keep Alive (18h): Supported
00:18:13.561 I/O Commands
00:18:13.561 ------------
00:18:13.561 Flush (00h): Supported LBA-Change
00:18:13.561 Write (01h): Supported LBA-Change
00:18:13.561 Read (02h): Supported
00:18:13.561 Compare (05h): Supported
00:18:13.561 Write Zeroes (08h): Supported LBA-Change
00:18:13.561 Dataset Management (09h): Supported LBA-Change
00:18:13.561 Copy (19h): Supported LBA-Change
00:18:13.561
00:18:13.561 Error Log
00:18:13.561 =========
00:18:13.561
00:18:13.561 Arbitration
00:18:13.561 ===========
00:18:13.561 Arbitration Burst: 1
00:18:13.561
00:18:13.561 Power Management
00:18:13.561 ================
00:18:13.561 Number of Power States: 1
00:18:13.561 Current Power State: Power State #0
00:18:13.561 Power State #0:
00:18:13.561 Max Power: 0.00 W
00:18:13.561 Non-Operational State: Operational
00:18:13.561 Entry Latency: Not Reported
00:18:13.561 Exit Latency: Not Reported
00:18:13.562 Relative Read Throughput: 0
00:18:13.562 Relative Read Latency: 0
00:18:13.562 Relative Write Throughput: 0
00:18:13.562 Relative Write Latency: 0
00:18:13.562 Idle Power: Not Reported
00:18:13.562 Active Power: Not Reported
00:18:13.562 Non-Operational Permissive Mode: Not Supported
00:18:13.562
00:18:13.562 Health Information
00:18:13.562 ==================
00:18:13.562 Critical Warnings:
00:18:13.562 Available Spare Space: OK
00:18:13.562 Temperature: OK
00:18:13.562 Device Reliability: OK
00:18:13.562 Read Only: No
00:18:13.562 Volatile Memory Backup: OK
00:18:13.562 Current Temperature: 0 Kelvin (-273 Celsius)
00:18:13.562 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:18:13.562 Available Spare: 0%
00:18:13.562 Available Spare Threshold: 0%
00:18:13.562 Life Percentage [2024-10-17 17:42:51.773139] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c00 length 0x40 lkey 0x181e00
00:18:13.562 [2024-10-17 17:42:51.773147] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:18:13.562 [2024-10-17 17:42:51.773165] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:18:13.562 [2024-10-17 17:42:51.773171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:18:13.562 [2024-10-17 17:42:51.773177] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf998 length 0x10 lkey 0x181e00
00:18:13.562 [2024-10-17 17:42:51.773206] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:18:13.562 [2024-10-17 17:42:51.773216] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 45428 doesn't match qid
00:18:13.562 [2024-10-17 17:42:51.773230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:5 sqhd:5990 p:0 m:0 dnr:0
00:18:13.562 [2024-10-17 17:42:51.773236] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 45428 doesn't match qid
00:18:13.562 [2024-10-17 17:42:51.773245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:5 sqhd:5990 p:0 m:0 dnr:0 00:18:13.562 [2024-10-17 17:42:51.773251] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 45428 doesn't match qid 00:18:13.562 [2024-10-17 17:42:51.773259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:5 sqhd:5990 p:0 m:0 dnr:0 00:18:13.562 [2024-10-17 17:42:51.773265] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 45428 doesn't match qid 00:18:13.562 [2024-10-17 17:42:51.773273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32674 cdw0:5 sqhd:5990 p:0 m:0 dnr:0 00:18:13.562 [2024-10-17 17:42:51.773282] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0840 length 0x40 lkey 0x181e00 00:18:13.562 [2024-10-17 17:42:51.773290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.562 [2024-10-17 17:42:51.773309] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.562 [2024-10-17 17:42:51.773315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:18:13.562 [2024-10-17 17:42:51.773323] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x181e00 00:18:13.562 [2024-10-17 17:42:51.773330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.562 [2024-10-17 17:42:51.773337] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9c0 length 0x10 lkey 0x181e00 00:18:13.562 [2024-10-17 17:42:51.773357] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.562 [2024-10-17 17:42:51.773363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:13.562 [2024-10-17 17:42:51.773370] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:18:13.562 [2024-10-17 17:42:51.773376] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:18:13.562 [2024-10-17 17:42:51.773382] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e8 length 0x10 lkey 0x181e00 00:18:13.562 [2024-10-17 17:42:51.773390] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x181e00 00:18:13.562 [2024-10-17 17:42:51.773398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.562 [2024-10-17 17:42:51.773421] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.562 [2024-10-17 17:42:51.773428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:18:13.562 [2024-10-17 17:42:51.773434] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa10 length 0x10 lkey 0x181e00 00:18:13.562 [2024-10-17 17:42:51.773443] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length 0x40 lkey 0x181e00 00:18:13.562 [2024-10-17 
17:42:51.773451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.564 [2024-10-17 17:42:51.775414] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.564 [2024-10-17 17:42:51.779425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:18:13.564 [2024-10-17 17:42:51.779433] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6f0 length 0x10 lkey 0x181e00 00:18:13.564 [2024-10-17 17:42:51.779442] nvme_rdma.c:2293:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0700 length
0x40 lkey 0x181e00 00:18:13.564 [2024-10-17 17:42:51.779450] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:18:13.564 [2024-10-17 17:42:51.779469] nvme_rdma.c:2496:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:18:13.564 [2024-10-17 17:42:51.779474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0006 p:0 m:0 dnr:0 00:18:13.564 [2024-10-17 17:42:51.779481] nvme_rdma.c:2389:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf718 length 0x10 lkey 0x181e00 00:18:13.564 [2024-10-17 17:42:51.779488] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:18:13.564 Used: 0% 00:18:13.564 Data Units Read: 0 00:18:13.564 Data Units Written: 0 00:18:13.564 Host Read Commands: 0 00:18:13.564 Host Write Commands: 0 00:18:13.564 Controller Busy Time: 0 minutes 00:18:13.564 Power Cycles: 0 00:18:13.564 Power On Hours: 0 hours 00:18:13.564 Unsafe Shutdowns: 0 00:18:13.564 Unrecoverable Media Errors: 0 00:18:13.564 Lifetime Error Log Entries: 0 00:18:13.564 Warning Temperature Time: 0 minutes 00:18:13.564 Critical Temperature Time: 0 minutes 00:18:13.564 00:18:13.564 Number of Queues 00:18:13.564 ================ 00:18:13.564 Number of I/O Submission Queues: 127 00:18:13.564 Number of I/O Completion Queues: 127 00:18:13.564 00:18:13.565 Active Namespaces 00:18:13.565 ================= 00:18:13.565 Namespace ID:1 00:18:13.565 Error Recovery Timeout: Unlimited 00:18:13.565 Command Set Identifier: NVM (00h) 00:18:13.565 Deallocate: Supported 00:18:13.565 Deallocated/Unwritten Error: Not Supported 00:18:13.565 Deallocated Read Value: Unknown 00:18:13.565 Deallocate in Write Zeroes: Not Supported 00:18:13.565 Deallocated Guard Field: 0xFFFF 00:18:13.565 Flush: Supported 00:18:13.565 Reservation: Supported 00:18:13.565 Namespace Sharing Capabilities: Multiple Controllers 00:18:13.565 Size (in LBAs): 131072 (0GiB) 00:18:13.565 Capacity (in LBAs): 131072 (0GiB) 00:18:13.565 Utilization (in LBAs): 131072 (0GiB) 00:18:13.565 NGUID: ABCDEF0123456789ABCDEF0123456789 00:18:13.565 EUI64: ABCDEF0123456789 00:18:13.565 UUID: d24999d0-8b70-4eb9-9702-fcb703ec4fd7 00:18:13.565 Thin Provisioning: Not Supported 00:18:13.565 Per-NS Atomic Units: Yes 00:18:13.565 Atomic Boundary Size (Normal): 0 00:18:13.565 Atomic Boundary Size (PFail): 0 00:18:13.565 Atomic Boundary Offset: 0 00:18:13.565 Maximum Single Source Range Length: 65535 00:18:13.565 Maximum Copy Length: 65535 00:18:13.565 Maximum Source Range Count: 1 00:18:13.565 NGUID/EUI64 Never Reused: No 00:18:13.565 Namespace Write Protected: No 00:18:13.565 Number of LBA Formats: 1 00:18:13.565 Current LBA Format: LBA Format #00 00:18:13.565 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:13.565 00:18:13.565 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:18:13.565 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:13.565 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.565 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:13.565 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.565 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM 
EXIT 00:18:13.565 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:18:13.565 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:13.565 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:18:13.565 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:13.565 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:13.565 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:18:13.565 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:13.565 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:13.565 rmmod nvme_rdma 00:18:13.565 rmmod nvme_fabrics 00:18:13.565 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:13.565 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:18:13.565 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:18:13.565 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 670293 ']' 00:18:13.565 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 670293 00:18:13.565 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 670293 ']' 00:18:13.565 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 670293 00:18:13.565 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:18:13.565 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:13.565 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 670293 00:18:13.565 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:13.565 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:13.565 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 670293' 00:18:13.565 killing process with pid 670293 00:18:13.823 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 670293 00:18:13.823 17:42:51 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 670293 00:18:14.082 17:42:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:14.082 17:42:52 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:18:14.082 00:18:14.082 real 0m8.216s 00:18:14.082 user 0m6.216s 00:18:14.082 sys 0m5.720s 00:18:14.082 17:42:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:14.082 17:42:52 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:18:14.082 ************************************ 00:18:14.082 END TEST nvmf_identify 00:18:14.082 ************************************ 00:18:14.082 17:42:52 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:18:14.082 17:42:52 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:14.082 17:42:52 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 
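Decoding the tail of the debug stream above: the final FABRIC PROPERTY GET comes back with cdw0:9, i.e. a CSTS value with RDY=1 and SHST=10b (shutdown processing complete), which is what lets nvme_ctrlr_shutdown_poll_async report "shutdown complete in 6 milliseconds". The teardown traced after it, stripped of the xtrace noise, amounts to a short shell sequence. A minimal standalone sketch follows, assuming a running nvmf target whose PID sits in $nvmfpid and the in-tree scripts/rpc.py helper (both names taken from the surrounding trace, not from any fixed interface):

#!/usr/bin/env bash
set -e
# Drop the test subsystem over JSON-RPC before shutting the target down,
# mirroring identify.sh's "rpc_cmd nvmf_delete_subsystem" call above.
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
# Unload the kernel initiator modules the RDMA host side pulled in,
# as nvmfcleanup's "modprobe -v -r" calls do in the trace.
sudo modprobe -v -r nvme-rdma
sudo modprobe -v -r nvme-fabrics
# Stop the nvmf_tgt reactor and reap it, as killprocess does.
kill "$nvmfpid"
wait "$nvmfpid"

The in-tree nvmftestfini is more defensive than this sketch: it retries the module removal up to 20 times under set +e (hence the bare rmmod nvme_rdma / rmmod nvme_fabrics lines interleaved above) before killing the target process.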
00:18:14.082 17:42:52 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:14.082 ************************************ 00:18:14.082 START TEST nvmf_perf 00:18:14.082 ************************************ 00:18:14.082 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:18:14.082 * Looking for test storage... 00:18:14.082 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:14.082 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:14.082 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:18:14.082 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:14.341 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:14.341 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:14.341 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:14.341 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:14.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.342 --rc genhtml_branch_coverage=1 00:18:14.342 --rc genhtml_function_coverage=1 00:18:14.342 --rc genhtml_legend=1 00:18:14.342 --rc geninfo_all_blocks=1 00:18:14.342 --rc geninfo_unexecuted_blocks=1 00:18:14.342 00:18:14.342 ' 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:14.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.342 --rc genhtml_branch_coverage=1 00:18:14.342 --rc genhtml_function_coverage=1 00:18:14.342 --rc genhtml_legend=1 00:18:14.342 --rc geninfo_all_blocks=1 00:18:14.342 --rc geninfo_unexecuted_blocks=1 00:18:14.342 00:18:14.342 ' 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:14.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.342 --rc genhtml_branch_coverage=1 00:18:14.342 --rc genhtml_function_coverage=1 00:18:14.342 --rc genhtml_legend=1 00:18:14.342 --rc geninfo_all_blocks=1 00:18:14.342 --rc geninfo_unexecuted_blocks=1 00:18:14.342 00:18:14.342 ' 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:14.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.342 --rc genhtml_branch_coverage=1 00:18:14.342 --rc genhtml_function_coverage=1 00:18:14.342 --rc genhtml_legend=1 00:18:14.342 --rc geninfo_all_blocks=1 00:18:14.342 --rc geninfo_unexecuted_blocks=1 00:18:14.342 00:18:14.342 ' 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:14.342 17:42:52 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:14.342 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.342 17:42:52 
nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:18:14.342 17:42:52 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:20.905 17:42:58 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:20.905 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:18:20.905 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:18:20.906 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:20.906 Found net devices under 0000:18:00.0: mlx_0_0 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 
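What this stretch of the trace is doing: gather_supported_nvmf_pci_devs in nvmf/common.sh builds device-ID lists per NIC family (e810, x722, mlx), keeps only the Mellanox list because this run selects mlx5 (the [[ mlx5 == mlx5 ]] check at nvmf/common.sh@353), and then maps each surviving PCI function to its kernel netdev through the /sys/bus/pci/devices/$pci/net/ glob, which is how 0000:18:00.0 resolves to mlx_0_0 above. A minimal standalone sketch of that sysfs lookup, illustrative only and not the nvmf/common.sh code itself:

    #!/usr/bin/env bash
    # Walk PCI sysfs, keep Mellanox (0x15b3) functions, and print the
    # netdev bound to each one, mirroring the lookup traced above.
    mellanox=0x15b3
    for dev in /sys/bus/pci/devices/*; do
        [[ $(<"$dev/vendor") == "$mellanox" ]] || continue
        for net in "$dev"/net/*; do
            [[ -e $net ]] || continue   # skip functions with no netdev bound
            echo "Found ${dev##*/} ($(<"$dev/vendor") - $(<"$dev/device")): ${net##*/}"
        done
    done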
00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:20.906 Found net devices under 0000:18:00.1: mlx_0_1 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # rdma_device_init 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # uname 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:20.906 17:42:58 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@528 -- # allocate_nic_ips 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:20.906 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:20.906 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:18:20.906 altname enp24s0f0np0 00:18:20.906 altname ens785f0np0 00:18:20.906 inet 192.168.100.8/24 scope global mlx_0_0 00:18:20.906 valid_lft forever preferred_lft forever 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:20.906 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:20.906 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:18:20.906 altname enp24s0f1np1 00:18:20.906 altname ens785f1np1 00:18:20.906 inet 192.168.100.9/24 scope global mlx_0_1 00:18:20.906 valid_lft forever preferred_lft forever 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- 
# '[' '' == iso ']' 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@109 -- # continue 2 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:20.906 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:20.907 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:20.907 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:20.907 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # 
RDMA_IP_LIST='192.168.100.8 00:18:20.907 192.168.100.9' 00:18:20.907 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:18:20.907 192.168.100.9' 00:18:20.907 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # head -n 1 00:18:20.907 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:20.907 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:18:20.907 192.168.100.9' 00:18:20.907 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # tail -n +2 00:18:20.907 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # head -n 1 00:18:20.907 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:20.907 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:18:20.907 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:20.907 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:18:20.907 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:18:20.907 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:18:20.907 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:18:20.907 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:18:20.907 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:20.907 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:20.907 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=673392 00:18:20.907 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 673392 00:18:20.907 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:20.907 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 673392 ']' 00:18:20.907 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.907 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:20.907 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.907 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:20.907 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:20.907 [2024-10-17 17:42:59.260482] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:18:20.907 [2024-10-17 17:42:59.260545] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:21.165 [2024-10-17 17:42:59.332054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:21.165 [2024-10-17 17:42:59.379960] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
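The two interface addresses assigned earlier (192.168.100.8 on mlx_0_0, 192.168.100.9 on mlx_0_1) are collected into RDMA_IP_LIST, and nvmf/common.sh@483-484 above splits them back out with head and tail before the target starts. That parsing, reproduced in isolation with the values from this run:

    RDMA_IP_LIST='192.168.100.8
    192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9

With both addresses known, nvmfappstart launches nvmf_tgt, whose remaining startup notices follow.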
00:18:21.165 [2024-10-17 17:42:59.380001] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:21.165 [2024-10-17 17:42:59.380011] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:21.165 [2024-10-17 17:42:59.380020] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:21.165 [2024-10-17 17:42:59.380027] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:21.165 [2024-10-17 17:42:59.381270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:21.165 [2024-10-17 17:42:59.381360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:21.165 [2024-10-17 17:42:59.381464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:21.165 [2024-10-17 17:42:59.381466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.165 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:21.165 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:18:21.165 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:18:21.165 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:21.165 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:21.165 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:21.165 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:18:21.165 17:42:59 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:18:22.538 17:43:00 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:18:22.538 17:43:00 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:18:22.538 17:43:00 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:18:22.538 17:43:00 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:22.795 17:43:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:18:22.795 17:43:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:18:22.795 17:43:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:18:22.795 17:43:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:18:22.795 17:43:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:18:23.053 [2024-10-17 17:43:01.308457] rdma.c:2735:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:18:23.053 [2024-10-17 17:43:01.328546] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1139ca0/0x11d8200) succeed. 00:18:23.053 [2024-10-17 17:43:01.339255] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x113c390/0x1143130) succeed. 
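With the target's reactors running and the RDMA transport created (the two create_ib_device notices just above), host/perf.sh provisions the subsystem over the RPC socket. Condensed into plain script form, this is the sequence the trace performs around this point, with $rpc as shorthand for the rpc.py path used in this run; the malloc bdev and transport calls appear above, the subsystem calls follow below:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512        # creates Malloc0 (64 MB, 512-byte blocks)
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420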
00:18:23.311 17:43:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:18:23.311 17:43:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:18:23.311 17:43:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:18:23.568 17:43:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:18:23.568 17:43:01 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:18:23.825 17:43:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:18:24.084 [2024-10-17 17:43:02.251218] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:18:24.084 17:43:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
00:18:24.342 17:43:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']'
00:18:24.342 17:43:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:18:24.342 17:43:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:18:24.342 17:43:02 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:18:25.715 Initializing NVMe Controllers
00:18:25.715 Attached to NVMe Controller at 0000:5e:00.0 [144d:a80a]
00:18:25.715 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0
00:18:25.715 Initialization complete. Launching workers.
00:18:25.715 ========================================================
00:18:25.715 Latency(us)
00:18:25.715 Device Information : IOPS MiB/s Average min max
00:18:25.715 PCIE (0000:5e:00.0) NSID 1 from core 0: 95366.24 372.52 335.00 12.24 4671.14
00:18:25.715 ========================================================
00:18:25.715 Total : 95366.24 372.52 335.00 12.24 4671.14
00:18:25.715
00:18:25.715 17:43:03 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:18:29.028 Initializing NVMe Controllers
00:18:29.028 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:18:29.028 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:18:29.028 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:18:29.028 Initialization complete. Launching workers.
00:18:29.028 ========================================================
00:18:29.028 Latency(us)
00:18:29.028 Device Information : IOPS MiB/s Average min max
00:18:29.028 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6797.96 26.55 146.29 48.11 7049.80
00:18:29.028 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5039.97 19.69 198.20 74.94 7084.70
00:18:29.028 ========================================================
00:18:29.028 Total : 11837.93 46.24 168.39 48.11 7084.70
00:18:29.028
00:18:29.028 17:43:07 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:18:32.309 Initializing NVMe Controllers
00:18:32.310 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:18:32.310 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:18:32.310 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:18:32.310 Initialization complete. Launching workers.
00:18:32.310 ========================================================
00:18:32.310 Latency(us)
00:18:32.310 Device Information : IOPS MiB/s Average min max
00:18:32.310 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17935.00 70.06 1786.52 482.99 7231.83
00:18:32.310 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4032.00 15.75 7971.42 5941.35 10147.58
00:18:32.310 ========================================================
00:18:32.310 Total : 21967.00 85.81 2921.75 482.99 10147.58
00:18:32.310
00:18:32.310 17:43:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]]
00:18:32.310 17:43:10 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:18:36.494 Initializing NVMe Controllers
00:18:36.494 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:18:36.494 Controller IO queue size 128, less than required.
00:18:36.494 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:18:36.494 Controller IO queue size 128, less than required.
00:18:36.494 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:18:36.494 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:18:36.494 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:18:36.494 Initialization complete. Launching workers.
00:18:36.494 ========================================================
00:18:36.494 Latency(us)
00:18:36.494 Device Information : IOPS MiB/s Average min max
00:18:36.494 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3936.00 984.00 32717.99 14434.54 72879.45
00:18:36.494 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3978.00 994.50 31917.53 13801.37 49720.17
00:18:36.494 ========================================================
00:18:36.494 Total : 7914.00 1978.50 32315.64 13801.37 72879.45
00:18:36.494
00:18:36.752 17:43:14 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4
00:18:37.010 No valid NVMe controllers or AIO or URING devices found
00:18:37.010 Initializing NVMe Controllers
00:18:37.010 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:18:37.010 Controller IO queue size 128, less than required.
00:18:37.010 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:18:37.010 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:18:37.010 Controller IO queue size 128, less than required.
00:18:37.010 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:18:37.010 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:18:37.010 WARNING: Some requested NVMe devices were skipped
00:18:37.010 17:43:15 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat
00:18:41.193 Initializing NVMe Controllers
00:18:41.193 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:18:41.193 Controller IO queue size 128, less than required.
00:18:41.193 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:18:41.193 Controller IO queue size 128, less than required.
00:18:41.193 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:18:41.193 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:18:41.193 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:18:41.193 Initialization complete. Launching workers.
00:18:41.193
00:18:41.193 ====================
00:18:41.193 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:18:41.193 RDMA transport:
00:18:41.193 dev name: mlx5_0
00:18:41.193 polls: 392891
00:18:41.194 idle_polls: 389725
00:18:41.194 completions: 43270
00:18:41.194 queued_requests: 1
00:18:41.194 total_send_wrs: 21635
00:18:41.194 send_doorbell_updates: 2912
00:18:41.194 total_recv_wrs: 21762
00:18:41.194 recv_doorbell_updates: 2916
00:18:41.194 ---------------------------------
00:18:41.194
00:18:41.194 ====================
00:18:41.194 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:18:41.194 RDMA transport:
00:18:41.194 dev name: mlx5_0
00:18:41.194 polls: 395982
00:18:41.194 idle_polls: 395722
00:18:41.194 completions: 19438
00:18:41.194 queued_requests: 1
00:18:41.194 total_send_wrs: 9719
00:18:41.194 send_doorbell_updates: 248
00:18:41.194 total_recv_wrs: 9846
00:18:41.194 recv_doorbell_updates: 252
00:18:41.194 ---------------------------------
00:18:41.194 ========================================================
00:18:41.194 Latency(us)
00:18:41.194 Device Information : IOPS MiB/s Average min max
00:18:41.194 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5402.54 1350.63 23656.55 11497.99 62005.16
00:18:41.194 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2426.82 606.71 52501.59 31073.75 78117.01
00:18:41.194 ========================================================
00:18:41.194 Total : 7829.36 1957.34 32597.48 11497.99 78117.01
00:18:41.194
00:18:41.451 17:43:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:18:41.451 17:43:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:41.451 17:43:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:18:41.451 17:43:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:18:41.451 17:43:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:18:41.451 17:43:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup
00:18:41.451 17:43:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:18:41.451 17:43:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:18:41.451 17:43:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:18:41.451 17:43:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:18:41.451 17:43:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:18:41.451 17:43:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:18:41.451 rmmod nvme_rdma
00:18:41.451 rmmod nvme_fabrics
00:18:41.709 17:43:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:18:41.709 17:43:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:18:41.709 17:43:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:18:41.709 17:43:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 673392 ']'
00:18:41.709 17:43:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 673392
00:18:41.709 17:43:19 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 673392 ']'
00:18:41.709 17:43:19 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 673392
00:18:41.709 17:43:19 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname
00:18:41.709 17:43:19 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:41.709 17:43:19 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 673392
00:18:41.709 17:43:19 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:18:41.709 17:43:19 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:18:41.709 17:43:19 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 673392'
killing process with pid 673392
00:18:41.709 17:43:19 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 673392
00:18:41.709 17:43:19 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 673392
00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]]
00:18:44.238
00:18:44.238 real 0m29.703s
00:18:44.238 user 1m32.676s
00:18:44.238 sys 0m6.658s
00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:18:44.238 ************************************
00:18:44.238 END TEST nvmf_perf
00:18:44.238 ************************************
00:18:44.238 17:43:22 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma
00:18:44.238 17:43:22 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:18:44.238 17:43:22 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:18:44.238 17:43:22 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:18:44.238 ************************************
00:18:44.238 START TEST nvmf_fio_host
00:18:44.238 ************************************
00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma
00:18:44.238 * Looking for test storage...
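nvmftestfini above unloads the host-side modules (the rmmod lines are modprobe -v -r output) and then reaps the target with killprocess 673392. A minimal sketch of the pattern the autotest_common.sh trace walks through (liveness check, process-name guard, kill, wait); this is a simplified reconstruction, not the helper itself:

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0    # nothing to do if the pid is already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")   # reactor_0 here, i.e. the nvmf_tgt reactor
        [ "$name" = sudo ] && return 1            # do not signal a sudo wrapper directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                               # reap; valid because the pid is our child
    }

With nvmf_perf finished, the log moves on to nvmf_fio_host, which starts below by locating its test storage and repeating the same common.sh environment setup.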
00:18:44.238 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:44.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.238 --rc genhtml_branch_coverage=1 00:18:44.238 --rc genhtml_function_coverage=1 00:18:44.238 --rc genhtml_legend=1 00:18:44.238 --rc geninfo_all_blocks=1 00:18:44.238 --rc geninfo_unexecuted_blocks=1 00:18:44.238 00:18:44.238 ' 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:44.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.238 --rc genhtml_branch_coverage=1 00:18:44.238 --rc genhtml_function_coverage=1 00:18:44.238 --rc genhtml_legend=1 00:18:44.238 --rc geninfo_all_blocks=1 00:18:44.238 --rc geninfo_unexecuted_blocks=1 00:18:44.238 00:18:44.238 ' 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:44.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.238 --rc genhtml_branch_coverage=1 00:18:44.238 --rc genhtml_function_coverage=1 00:18:44.238 --rc genhtml_legend=1 00:18:44.238 --rc geninfo_all_blocks=1 00:18:44.238 --rc geninfo_unexecuted_blocks=1 00:18:44.238 00:18:44.238 ' 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:44.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.238 --rc genhtml_branch_coverage=1 00:18:44.238 --rc genhtml_function_coverage=1 00:18:44.238 --rc genhtml_legend=1 00:18:44.238 --rc geninfo_all_blocks=1 00:18:44.238 --rc geninfo_unexecuted_blocks=1 00:18:44.238 00:18:44.238 ' 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:44.238 17:43:22 
nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:44.238 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:44.239 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.239 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.239 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.239 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:44.239 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.239 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:18:44.239 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:44.239 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:44.239 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:44.239 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:44.239 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:44.239 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:44.239 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:44.239 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:44.239 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:44.239 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:44.239 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:44.239 
17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:18:44.239 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:18:44.239 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:44.239 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:44.239 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:44.239 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:44.239 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.239 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.239 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.239 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:44.239 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:44.239 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:18:44.239 17:43:22 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.800 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:50.800 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:18:50.800 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:50.800 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:50.800 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:50.800 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:50.800 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:50.800 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:18:50.800 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:50.800 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:18:50.800 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:18:50.800 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:18:50.800 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:18:50.800 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:18:50.800 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:18:50.800 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:50.800 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:50.800 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:50.800 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:50.800 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:50.800 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:18:50.801 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:18:50.801 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 
)) 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:18:50.801 Found net devices under 0000:18:00.0: mlx_0_0 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:18:50.801 Found net devices under 0000:18:00.1: mlx_0_1 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # rdma_device_init 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # uname 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@528 -- # allocate_nic_ips 00:18:50.801 
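
Before any addresses are assigned, rdma_device_init above loads the kernel RDMA/InfiniBand stack via load_ib_rdma_modules. A condensed sketch of that step (module names and order are taken verbatim from the trace; the loop is shorthand, the script issues each modprobe on its own line):

    # Load the IB/RDMA core stack required for NVMe-oF over RDMA.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done
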
17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:18:50.801 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:50.801 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:18:50.801 altname enp24s0f0np0 00:18:50.801 altname ens785f0np0 00:18:50.801 inet 192.168.100.8/24 scope global mlx_0_0 00:18:50.801 valid_lft forever preferred_lft forever 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:18:50.801 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:50.801 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:18:50.801 altname enp24s0f1np1 00:18:50.801 altname ens785f1np1 00:18:50.801 inet 192.168.100.9/24 scope global mlx_0_1 00:18:50.801 valid_lft forever preferred_lft forever 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:50.801 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:18:50.802 17:43:28 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@109 -- # continue 2 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:18:50.802 192.168.100.9' 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:18:50.802 192.168.100.9' 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # head -n 1 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:18:50.802 192.168.100.9' 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # tail -n +2 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # head -n 1 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=679298 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 679298 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 679298 ']' 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:50.802 [2024-10-17 17:43:28.694999] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:18:50.802 [2024-10-17 17:43:28.695061] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.802 [2024-10-17 17:43:28.764642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:50.802 [2024-10-17 17:43:28.811853] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.802 [2024-10-17 17:43:28.811898] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:50.802 [2024-10-17 17:43:28.811907] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.802 [2024-10-17 17:43:28.811916] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:50.802 [2024-10-17 17:43:28.811923] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:50.802 [2024-10-17 17:43:28.813343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:50.802 [2024-10-17 17:43:28.813371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:50.802 [2024-10-17 17:43:28.813453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:50.802 [2024-10-17 17:43:28.813454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:18:50.802 17:43:28 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:50.802 [2024-10-17 17:43:29.119605] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x183b2c0/0x183f7b0) succeed. 00:18:50.802 [2024-10-17 17:43:29.130019] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x183c950/0x1880e50) succeed. 
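
With both IB devices created, the target is up and all further configuration goes over the JSON-RPC socket. A condensed sketch of the bring-up sequence visible in the surrounding trace (paths are shortened to the repo root, and the harness's waitforlisten helper, which polls /var/tmp/spdk.sock, is elided):

    # Start the NVMe-oF target on cores 0-3 (-m 0xF) with all trace groups enabled.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Once the RPC socket is listening: create the RDMA transport, back a
    # subsystem with a 64 MB malloc bdev, and listen on the first RoCE interface.
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

Every RPC shown is copied from the trace that follows; only the relative paths and ordering into one block are a simplification.
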
00:18:51.061 17:43:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:18:51.061 17:43:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:51.061 17:43:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.061 17:43:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:51.319 Malloc1 00:18:51.319 17:43:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:51.576 17:43:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:51.576 17:43:29 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:51.834 [2024-10-17 17:43:30.132919] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:51.834 17:43:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:52.091 17:43:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:18:52.091 17:43:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:18:52.091 17:43:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:18:52.091 17:43:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:52.091 17:43:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:52.091 17:43:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:52.091 17:43:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:18:52.091 17:43:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:52.091 17:43:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:52.091 17:43:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:52.091 17:43:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:52.091 17:43:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:18:52.091 17:43:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:52.091 17:43:30 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:52.091 17:43:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:52.091 17:43:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:52.091 17:43:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:18:52.091 17:43:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:52.091 17:43:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:52.091 17:43:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:52.091 17:43:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:52.091 17:43:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:18:52.091 17:43:30 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:18:52.349 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:52.349 fio-3.35 00:18:52.349 Starting 1 thread 00:18:54.979 00:18:54.979 test: (groupid=0, jobs=1): err= 0: pid=679690: Thu Oct 17 17:43:33 2024 00:18:54.979 read: IOPS=17.4k, BW=68.0MiB/s (71.3MB/s)(136MiB/2004msec) 00:18:54.979 slat (nsec): min=1383, max=35287, avg=1554.27, stdev=477.84 00:18:54.979 clat (usec): min=1690, max=6496, avg=3644.98, stdev=83.12 00:18:54.979 lat (usec): min=1705, max=6498, avg=3646.53, stdev=83.04 00:18:54.979 clat percentiles (usec): 00:18:54.979 | 1.00th=[ 3589], 5.00th=[ 3621], 10.00th=[ 3621], 20.00th=[ 3621], 00:18:54.979 | 30.00th=[ 3621], 40.00th=[ 3621], 50.00th=[ 3654], 60.00th=[ 3654], 00:18:54.979 | 70.00th=[ 3654], 80.00th=[ 3654], 90.00th=[ 3654], 95.00th=[ 3687], 00:18:54.979 | 99.00th=[ 3785], 99.50th=[ 3916], 99.90th=[ 4686], 99.95th=[ 5538], 00:18:54.979 | 99.99th=[ 6456] 00:18:54.979 bw ( KiB/s): min=68104, max=70416, per=100.00%, avg=69694.00, stdev=1075.11, samples=4 00:18:54.979 iops : min=17026, max=17604, avg=17423.50, stdev=268.78, samples=4 00:18:54.979 write: IOPS=17.4k, BW=68.1MiB/s (71.4MB/s)(137MiB/2004msec); 0 zone resets 00:18:54.979 slat (nsec): min=1426, max=22444, avg=1889.37, stdev=526.67 00:18:54.979 clat (usec): min=2504, max=6523, avg=3643.76, stdev=93.79 00:18:54.979 lat (usec): min=2515, max=6525, avg=3645.65, stdev=93.73 00:18:54.979 clat percentiles (usec): 00:18:54.979 | 1.00th=[ 3589], 5.00th=[ 3621], 10.00th=[ 3621], 20.00th=[ 3621], 00:18:54.979 | 30.00th=[ 3621], 40.00th=[ 3621], 50.00th=[ 3654], 60.00th=[ 3654], 00:18:54.979 | 70.00th=[ 3654], 80.00th=[ 3654], 90.00th=[ 3654], 95.00th=[ 3687], 00:18:54.979 | 99.00th=[ 3785], 99.50th=[ 3982], 99.90th=[ 4686], 99.95th=[ 5997], 00:18:54.979 | 99.99th=[ 6521] 00:18:54.979 bw ( KiB/s): min=68280, max=70376, per=100.00%, avg=69774.00, stdev=999.61, samples=4 00:18:54.979 iops : min=17070, max=17594, avg=17443.50, stdev=249.90, samples=4 00:18:54.979 lat (msec) : 2=0.01%, 4=99.55%, 10=0.45% 00:18:54.979 cpu : usr=99.45%, sys=0.10%, ctx=16, majf=0, minf=3 00:18:54.980 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:54.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:54.980 issued rwts: total=34908,34953,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.980 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:54.980 00:18:54.980 Run status group 0 (all jobs): 00:18:54.980 READ: bw=68.0MiB/s (71.3MB/s), 68.0MiB/s-68.0MiB/s (71.3MB/s-71.3MB/s), io=136MiB (143MB), run=2004-2004msec 00:18:54.980 WRITE: bw=68.1MiB/s (71.4MB/s), 68.1MiB/s-68.1MiB/s (71.4MB/s-71.4MB/s), io=137MiB (143MB), run=2004-2004msec 00:18:54.980 17:43:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:18:54.980 17:43:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:18:54.980 17:43:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:54.980 17:43:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:54.980 17:43:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:54.980 17:43:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:18:54.980 17:43:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:54.980 17:43:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:54.980 17:43:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:54.980 17:43:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:18:54.980 17:43:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:54.980 17:43:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:54.980 17:43:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:54.980 17:43:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:54.980 17:43:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:54.980 17:43:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:18:54.980 17:43:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:54.980 17:43:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:54.980 17:43:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:54.980 17:43:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:54.980 17:43:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- 
# LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:18:54.980 17:43:33 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:18:54.980 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:18:54.980 fio-3.35 00:18:54.980 Starting 1 thread 00:18:57.508 00:18:57.508 test: (groupid=0, jobs=1): err= 0: pid=680147: Thu Oct 17 17:43:35 2024 00:18:57.508 read: IOPS=14.3k, BW=223MiB/s (234MB/s)(442MiB/1983msec) 00:18:57.508 slat (nsec): min=2289, max=50787, avg=2668.90, stdev=1330.92 00:18:57.508 clat (usec): min=518, max=9742, avg=1604.16, stdev=1283.81 00:18:57.508 lat (usec): min=520, max=9749, avg=1606.83, stdev=1284.40 00:18:57.508 clat percentiles (usec): 00:18:57.508 | 1.00th=[ 693], 5.00th=[ 791], 10.00th=[ 848], 20.00th=[ 930], 00:18:57.508 | 30.00th=[ 1004], 40.00th=[ 1090], 50.00th=[ 1188], 60.00th=[ 1303], 00:18:57.508 | 70.00th=[ 1434], 80.00th=[ 1631], 90.00th=[ 3261], 95.00th=[ 5014], 00:18:57.508 | 99.00th=[ 6783], 99.50th=[ 7308], 99.90th=[ 8455], 99.95th=[ 9110], 00:18:57.508 | 99.99th=[ 9634] 00:18:57.508 bw ( KiB/s): min=111840, max=116256, per=49.53%, avg=113136.00, stdev=2090.48, samples=4 00:18:57.508 iops : min= 6990, max= 7266, avg=7071.00, stdev=130.65, samples=4 00:18:57.508 write: IOPS=8048, BW=126MiB/s (132MB/s)(229MiB/1824msec); 0 zone resets 00:18:57.508 slat (usec): min=26, max=141, avg=29.90, stdev= 5.95 00:18:57.508 clat (usec): min=4302, max=20224, avg=12836.19, stdev=1900.06 00:18:57.508 lat (usec): min=4329, max=20253, avg=12866.09, stdev=1899.70 00:18:57.508 clat percentiles (usec): 00:18:57.508 | 1.00th=[ 7963], 5.00th=[10028], 10.00th=[10552], 20.00th=[11338], 00:18:57.508 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12780], 60.00th=[13304], 00:18:57.508 | 70.00th=[13698], 80.00th=[14353], 90.00th=[15270], 95.00th=[15926], 00:18:57.508 | 99.00th=[17171], 99.50th=[17957], 99.90th=[19006], 99.95th=[19530], 00:18:57.508 | 99.99th=[20055] 00:18:57.508 bw ( KiB/s): min=112928, max=120256, per=90.54%, avg=116592.00, stdev=3018.93, samples=4 00:18:57.508 iops : min= 7058, max= 7516, avg=7287.00, stdev=188.68, samples=4 00:18:57.508 lat (usec) : 750=1.93%, 1000=17.90% 00:18:57.508 lat (msec) : 2=38.02%, 4=2.45%, 10=7.31%, 20=32.40%, 50=0.01% 00:18:57.508 cpu : usr=97.06%, sys=1.70%, ctx=184, majf=0, minf=2 00:18:57.508 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:18:57.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:57.508 issued rwts: total=28307,14681,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.508 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:57.508 00:18:57.508 Run status group 0 (all jobs): 00:18:57.508 READ: bw=223MiB/s (234MB/s), 223MiB/s-223MiB/s (234MB/s-234MB/s), io=442MiB (464MB), run=1983-1983msec 00:18:57.508 WRITE: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=229MiB (241MB), run=1824-1824msec 00:18:57.508 17:43:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:57.767 17:43:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 
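
Note that both fio passes above run through SPDK's userspace ioengine rather than a kernel NVMe device: LD_PRELOAD injects build/fio/spdk_nvme, and the --filename string carries the transport tuple instead of a block-device path. A standalone invocation in the same shape as the first pass (jobfile, block size, and the filename spec are copied from the trace; the fio binary location will vary by host):

    # Drive the RDMA target directly from fio via the SPDK plugin (ioengine=spdk).
    LD_PRELOAD=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme \
      fio app/fio/nvme/example_config.fio \
      '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' \
      --bs=4096
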
00:18:57.767 17:43:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:18:57.767 17:43:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:18:57.767 17:43:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:18:57.767 17:43:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:18:57.767 17:43:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:18:57.767 17:43:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:18:57.767 17:43:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:18:57.767 17:43:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:18:57.767 17:43:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:57.767 17:43:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:18:57.767 rmmod nvme_rdma 00:18:57.767 rmmod nvme_fabrics 00:18:57.767 17:43:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:57.767 17:43:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:18:57.767 17:43:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:18:57.767 17:43:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 679298 ']' 00:18:57.767 17:43:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 679298 00:18:57.767 17:43:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 679298 ']' 00:18:57.767 17:43:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 679298 00:18:57.767 17:43:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:18:57.767 17:43:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:57.767 17:43:35 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 679298 00:18:57.767 17:43:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:57.767 17:43:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:57.767 17:43:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 679298' 00:18:57.767 killing process with pid 679298 00:18:57.767 17:43:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 679298 00:18:57.767 17:43:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 679298 00:18:58.025 17:43:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:18:58.025 17:43:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:18:58.025 00:18:58.025 real 0m14.220s 00:18:58.025 user 0m42.550s 00:18:58.025 sys 0m5.855s 00:18:58.025 17:43:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:58.025 17:43:36 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.025 ************************************ 00:18:58.025 END TEST nvmf_fio_host 00:18:58.025 ************************************ 00:18:58.025 17:43:36 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:18:58.025 17:43:36 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:58.025 17:43:36 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:58.025 17:43:36 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:58.025 ************************************ 00:18:58.025 START TEST nvmf_failover 00:18:58.025 ************************************ 00:18:58.025 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:18:58.284 * Looking for test storage... 00:18:58.284 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:58.284 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:58.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.284 --rc genhtml_branch_coverage=1 00:18:58.284 --rc genhtml_function_coverage=1 00:18:58.284 --rc genhtml_legend=1 00:18:58.284 --rc geninfo_all_blocks=1 00:18:58.285 --rc geninfo_unexecuted_blocks=1 00:18:58.285 00:18:58.285 ' 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:58.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.285 --rc genhtml_branch_coverage=1 00:18:58.285 --rc genhtml_function_coverage=1 00:18:58.285 --rc genhtml_legend=1 00:18:58.285 --rc geninfo_all_blocks=1 00:18:58.285 --rc geninfo_unexecuted_blocks=1 00:18:58.285 00:18:58.285 ' 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:58.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.285 --rc genhtml_branch_coverage=1 00:18:58.285 --rc genhtml_function_coverage=1 00:18:58.285 --rc genhtml_legend=1 00:18:58.285 --rc geninfo_all_blocks=1 00:18:58.285 --rc geninfo_unexecuted_blocks=1 00:18:58.285 00:18:58.285 ' 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:58.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.285 --rc genhtml_branch_coverage=1 00:18:58.285 --rc genhtml_function_coverage=1 00:18:58.285 --rc genhtml_legend=1 00:18:58.285 --rc geninfo_all_blocks=1 00:18:58.285 --rc geninfo_unexecuted_blocks=1 00:18:58.285 00:18:58.285 ' 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:58.285 17:43:36 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:58.285 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:18:58.285 17:43:36 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:19:04.845 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:19:04.845 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # 
[[ rdma == rdma ]] 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:19:04.845 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:19:04.846 Found net devices under 0000:18:00.0: mlx_0_0 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:19:04.846 Found net devices under 0000:18:00.1: mlx_0_1 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # rdma_device_init 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # uname 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@71 -- # 
modprobe rdma_cm 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@528 -- # allocate_nic_ips 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:04.846 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:04.846 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:19:04.846 altname enp24s0f0np0 00:19:04.846 altname ens785f0np0 00:19:04.846 inet 192.168.100.8/24 scope global mlx_0_0 00:19:04.846 
valid_lft forever preferred_lft forever 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:04.846 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:04.846 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:19:04.846 altname enp24s0f1np1 00:19:04.846 altname ens785f1np1 00:19:04.846 inet 192.168.100.9/24 scope global mlx_0_1 00:19:04.846 valid_lft forever preferred_lft forever 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:04.846 17:43:43 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@109 -- # continue 2 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:04.846 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:19:04.846 192.168.100.9' 00:19:05.104 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:19:05.104 192.168.100.9' 00:19:05.104 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # head -n 1 00:19:05.104 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:05.104 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:19:05.104 192.168.100.9' 00:19:05.104 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # tail -n +2 00:19:05.104 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # head -n 1 00:19:05.104 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:05.104 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:19:05.104 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:05.104 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:19:05.104 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:19:05.104 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:19:05.104 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:19:05.104 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:05.104 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:05.104 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:05.104 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=683442 00:19:05.104 
17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:05.104 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # waitforlisten 683442 00:19:05.104 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 683442 ']' 00:19:05.104 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.104 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:05.104 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.104 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:05.104 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:05.104 [2024-10-17 17:43:43.341401] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:19:05.104 [2024-10-17 17:43:43.341470] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:05.104 [2024-10-17 17:43:43.414196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:05.104 [2024-10-17 17:43:43.461280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:05.104 [2024-10-17 17:43:43.461321] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:05.105 [2024-10-17 17:43:43.461331] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:05.105 [2024-10-17 17:43:43.461340] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:05.105 [2024-10-17 17:43:43.461348] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
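For readers decoding the trace above: allocate_nic_ips (nvmf/common.sh@528) walks get_rdma_if_list and, for each RDMA-capable interface, extracts its IPv4 address with the ip/awk/cut pipeline traced at nvmf/common.sh@116-117. A minimal standalone sketch of that pipeline follows; the function name mirrors the helper in the trace, and an interface with exactly one IPv4 address (as mlx_0_0/mlx_0_1 have on this rig) is assumed:

  #!/usr/bin/env bash
  # Print the IPv4 address bound to an interface, the way common.sh@117 does:
  # 'ip -o -4' emits one line per address, field 4 is "ADDR/PREFIXLEN",
  # and cut strips the prefix length.
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  get_ip_address mlx_0_0   # on this rig: 192.168.100.8
  get_ip_address mlx_0_1   # on this rig: 192.168.100.9

The two results are what the trace joins into RDMA_IP_LIST and then splits into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP with head/tail.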
00:19:05.105 [2024-10-17 17:43:43.462570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.105 [2024-10-17 17:43:43.462633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:05.105 [2024-10-17 17:43:43.462635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.362 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:05.362 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:19:05.362 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:05.362 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:05.363 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:05.363 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:05.363 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:05.620 [2024-10-17 17:43:43.814614] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x125aab0/0x125efa0) succeed. 00:19:05.620 [2024-10-17 17:43:43.824702] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x125c0a0/0x12a0640) succeed. 00:19:05.620 17:43:43 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:05.878 Malloc0 00:19:05.878 17:43:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:06.135 17:43:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:06.393 17:43:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:06.393 [2024-10-17 17:43:44.762617] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:06.651 17:43:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:19:06.651 [2024-10-17 17:43:44.959100] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:19:06.651 17:43:44 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:19:06.909 [2024-10-17 17:43:45.147733] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:19:06.909 17:43:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=683658 00:19:06.909 17:43:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
verify -t 15 -f 00:19:06.909 17:43:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:06.909 17:43:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 683658 /var/tmp/bdevperf.sock 00:19:06.909 17:43:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 683658 ']' 00:19:06.909 17:43:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:06.909 17:43:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:06.909 17:43:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:06.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:06.909 17:43:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:06.909 17:43:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:07.167 17:43:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:07.167 17:43:45 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:19:07.167 17:43:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:07.425 NVMe0n1 00:19:07.425 17:43:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:07.683 00:19:07.683 17:43:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=683835 00:19:07.683 17:43:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:07.683 17:43:45 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:19:08.616 17:43:46 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:08.873 17:43:47 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:19:12.152 17:43:50 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:12.152 00:19:12.152 17:43:50 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:19:12.410 17:43:50 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:19:15.687 17:43:53 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:15.687 [2024-10-17 17:43:53.875971] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:15.687 17:43:53 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:19:16.619 17:43:54 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:19:16.876 17:43:55 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 683835 00:19:23.440 { 00:19:23.440 "results": [ 00:19:23.440 { 00:19:23.440 "job": "NVMe0n1", 00:19:23.440 "core_mask": "0x1", 00:19:23.440 "workload": "verify", 00:19:23.440 "status": "finished", 00:19:23.440 "verify_range": { 00:19:23.440 "start": 0, 00:19:23.440 "length": 16384 00:19:23.440 }, 00:19:23.440 "queue_depth": 128, 00:19:23.440 "io_size": 4096, 00:19:23.440 "runtime": 15.005735, 00:19:23.440 "iops": 14071.353385888795, 00:19:23.440 "mibps": 54.966224163628105, 00:19:23.440 "io_failed": 4213, 00:19:23.440 "io_timeout": 0, 00:19:23.440 "avg_latency_us": 8896.16819172071, 00:19:23.440 "min_latency_us": 343.7078260869565, 00:19:23.440 "max_latency_us": 1050399.6104347827 00:19:23.440 } 00:19:23.440 ], 00:19:23.440 "core_count": 1 00:19:23.440 } 00:19:23.440 17:44:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 683658 00:19:23.440 17:44:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 683658 ']' 00:19:23.440 17:44:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 683658 00:19:23.440 17:44:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:19:23.440 17:44:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:23.440 17:44:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 683658 00:19:23.440 17:44:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:23.440 17:44:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:23.440 17:44:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 683658' 00:19:23.440 killing process with pid 683658 00:19:23.440 17:44:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 683658 00:19:23.440 17:44:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 683658 00:19:23.440 17:44:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:23.440 [2024-10-17 17:43:45.217535] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
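Before the raw bdevperf log that follows, it helps to see the exercise condensed. Every rpc.py invocation below appears verbatim in the trace above (failover.sh@22-57); only the shell variables and the port loop are editorial shorthand, and rpc_py points at this workspace's checkout:

  #!/usr/bin/env bash
  rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  bperf_rpc="$rpc_py -s /var/tmp/bdevperf.sock"
  nqn=nqn.2016-06.io.spdk:cnode1

  # Target side: RDMA transport, one 64 MiB / 512 B-block malloc namespace,
  # and three listeners on the same IP so the host has paths to fail over to.
  $rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc_py bdev_malloc_create 64 512 -b Malloc0
  $rpc_py nvmf_create_subsystem $nqn -a -s SPDK00000000000001
  $rpc_py nvmf_subsystem_add_ns $nqn Malloc0
  for port in 4420 4421 4422; do
      $rpc_py nvmf_subsystem_add_listener $nqn -t rdma -a 192.168.100.8 -s $port
  done

  # Host side: bdevperf (started with -z -r /var/tmp/bdevperf.sock -q 128
  # -o 4096 -w verify -t 15 -f) attaches two paths with -x failover ...
  $bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n $nqn -x failover
  $bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n $nqn -x failover

  # ... and the test then forces failovers mid-run by removing whichever
  # listener is active (remove 4420, attach 4422, remove 4421, re-add 4420,
  # remove 4422), exactly as traced at failover.sh@43-57; the first removal:
  $rpc_py nvmf_subsystem_remove_listener $nqn -t rdma -a 192.168.100.8 -s 4420

The summary JSON above is consistent with this setup: 14071.35 IOPS of 4 KiB I/O works out to 14071.35 x 4096 / 2^20 ≈ 54.97 MiB/s over the ~15 s run, and the 4213 failed I/Os are presumably the commands caught in flight during the listener removals, as the aborted-command notices below suggest.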
00:19:23.440 [2024-10-17 17:43:45.217595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid683658 ] 00:19:23.440 [2024-10-17 17:43:45.290686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.440 [2024-10-17 17:43:45.334739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.440 Running I/O for 15 seconds... 00:19:23.440 17792.00 IOPS, 69.50 MiB/s [2024-10-17T15:44:01.831Z] 9679.50 IOPS, 37.81 MiB/s [2024-10-17T15:44:01.831Z] [2024-10-17 17:43:48.178928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c0000 len:0x1000 key:0x181500 00:19:23.440 [2024-10-17 17:43:48.178966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.440 [2024-10-17 17:43:48.178987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043be000 len:0x1000 key:0x181500 00:19:23.440 [2024-10-17 17:43:48.178998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.441 [2024-10-17 17:43:48.179009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bc000 len:0x1000 key:0x181500 00:19:23.441 [2024-10-17 17:43:48.179019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.441 [2024-10-17 17:43:48.179030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:23824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ba000 len:0x1000 key:0x181500 00:19:23.441 [2024-10-17 17:43:48.179039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.441 [2024-10-17 17:43:48.179050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b8000 len:0x1000 key:0x181500 00:19:23.441 [2024-10-17 17:43:48.179059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.441 [2024-10-17 17:43:48.179070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b6000 len:0x1000 key:0x181500 00:19:23.441 [2024-10-17 17:43:48.179079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.441 [2024-10-17 17:43:48.179090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b4000 len:0x1000 key:0x181500 00:19:23.441 [2024-10-17 17:43:48.179099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.441 [2024-10-17 17:43:48.179110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23856 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x2000043b2000 len:0x1000 key:0x181500 00:19:23.441 [2024-10-17 17:43:48.179119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0
[... the same READ / "ABORTED - SQ DELETION (00/08)" notice pair repeats for every outstanding command from lba:23864 through lba:24504 and is trimmed here for readability: after the 4420 listener removal, each read still queued on qid:1 is printed and completed as aborted while the path fails over ...]
00:19:23.443 [2024-10-17 17:43:48.180771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430e000 len:0x1000 key:0x181500 [2024-10-17 17:43:48.180780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000
sqhd:7250 p:0 m:0 dnr:0 00:19:23.443 [2024-10-17 17:43:48.180791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430c000 len:0x1000 key:0x181500 00:19:23.443 [2024-10-17 17:43:48.180799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.443 [2024-10-17 17:43:48.180810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430a000 len:0x1000 key:0x181500 00:19:23.443 [2024-10-17 17:43:48.180819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.443 [2024-10-17 17:43:48.180830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004308000 len:0x1000 key:0x181500 00:19:23.443 [2024-10-17 17:43:48.180839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.443 [2024-10-17 17:43:48.180849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004306000 len:0x1000 key:0x181500 00:19:23.443 [2024-10-17 17:43:48.180858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.443 [2024-10-17 17:43:48.180869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004304000 len:0x1000 key:0x181500 00:19:23.443 [2024-10-17 17:43:48.180878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.443 [2024-10-17 17:43:48.180888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004302000 len:0x1000 key:0x181500 00:19:23.443 [2024-10-17 17:43:48.180899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.443 [2024-10-17 17:43:48.180910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004300000 len:0x1000 key:0x181500 00:19:23.443 [2024-10-17 17:43:48.180919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.443 [2024-10-17 17:43:48.180930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.443 [2024-10-17 17:43:48.180939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.443 [2024-10-17 17:43:48.180950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.443 [2024-10-17 17:43:48.180959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.443 [2024-10-17 17:43:48.180969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:88 nsid:1 lba:24592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.443 [2024-10-17 17:43:48.180978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.443 [2024-10-17 17:43:48.180989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.443 [2024-10-17 17:43:48.180997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.443 [2024-10-17 17:43:48.181009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.443 [2024-10-17 17:43:48.181019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.443 [2024-10-17 17:43:48.181029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.443 [2024-10-17 17:43:48.181038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.443 [2024-10-17 17:43:48.181048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.443 [2024-10-17 17:43:48.181057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.443 [2024-10-17 17:43:48.181067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.443 [2024-10-17 17:43:48.181076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.443 [2024-10-17 17:43:48.181087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.443 [2024-10-17 17:43:48.181095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.443 [2024-10-17 17:43:48.181106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.443 [2024-10-17 17:43:48.181114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.443 [2024-10-17 17:43:48.181125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.443 [2024-10-17 17:43:48.181134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.443 [2024-10-17 17:43:48.181145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.443 [2024-10-17 17:43:48.181153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.443 [2024-10-17 17:43:48.181164] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.443 [2024-10-17 17:43:48.181173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.444 [2024-10-17 17:43:48.181183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.444 [2024-10-17 17:43:48.181192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.444 [2024-10-17 17:43:48.181203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.444 [2024-10-17 17:43:48.181212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.444 [2024-10-17 17:43:48.181223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.444 [2024-10-17 17:43:48.181232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.444 [2024-10-17 17:43:48.181244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.444 [2024-10-17 17:43:48.181253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.444 [2024-10-17 17:43:48.181263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.444 [2024-10-17 17:43:48.181272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.444 [2024-10-17 17:43:48.181283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.444 [2024-10-17 17:43:48.181292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.444 [2024-10-17 17:43:48.181303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.444 [2024-10-17 17:43:48.181312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.444 [2024-10-17 17:43:48.181322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.444 [2024-10-17 17:43:48.181331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.444 [2024-10-17 17:43:48.181341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.444 [2024-10-17 17:43:48.181350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.444 [2024-10-17 17:43:48.181360] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.444 [2024-10-17 17:43:48.181369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.444 [2024-10-17 17:43:48.181380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.444 [2024-10-17 17:43:48.181389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.444 [2024-10-17 17:43:48.181399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:24768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.444 [2024-10-17 17:43:48.181408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.444 [2024-10-17 17:43:48.181423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.444 [2024-10-17 17:43:48.181433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.444 [2024-10-17 17:43:48.181444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.444 [2024-10-17 17:43:48.181453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.444 [2024-10-17 17:43:48.181463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.444 [2024-10-17 17:43:48.181472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.444 [2024-10-17 17:43:48.181482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.444 [2024-10-17 17:43:48.181495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.444 [2024-10-17 17:43:48.181506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.444 [2024-10-17 17:43:48.181515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.444 [2024-10-17 17:43:48.192154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:23.444 [2024-10-17 17:43:48.192169] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:23.444 [2024-10-17 17:43:48.192178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24816 len:8 PRP1 0x0 PRP2 0x0 00:19:23.444 [2024-10-17 17:43:48.192188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.444 [2024-10-17 17:43:48.192232] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000168e4900 was disconnected and freed. reset controller. 
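Every completion above carries the status pair (00/08): status code type 0x0 (generic command status) with status code 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion; that is the string SPDK prints. A minimal, self-contained C sketch of the decoding (illustrative only, not SPDK code):

#include <stdio.h>

/* Map the "(sct/sc)" status pair that spdk_nvme_print_completion logs.
 * Only the values seen in this log are covered; the full tables are in
 * the NVMe base specification. */
static const char *nvme_status_str(unsigned sct, unsigned sc)
{
    if (sct == 0x0 && sc == 0x00) return "SUCCESS";
    if (sct == 0x0 && sc == 0x08) return "ABORTED - SQ DELETION";
    return "unknown (see NVMe status code tables)";
}

int main(void)
{
    /* Every completion in the dump above carries (00/08). */
    printf("(00/08) -> %s\n", nvme_status_str(0x00, 0x08));
    return 0;
}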
00:19:23.444 [2024-10-17 17:43:48.192245] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:19:23.444 [2024-10-17 17:43:48.192256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:23.444 [2024-10-17 17:43:48.192296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:19:23.444 [2024-10-17 17:43:48.192308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32764 cdw0:10cfe60 sqhd:50b0 p:0 m:0 dnr:0
00:19:23.444 [2024-10-17 17:43:48.192319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:19:23.444 [2024-10-17 17:43:48.192329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32764 cdw0:10cfe60 sqhd:50b0 p:0 m:0 dnr:0
00:19:23.444 [2024-10-17 17:43:48.192339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:19:23.444 [2024-10-17 17:43:48.192348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32764 cdw0:10cfe60 sqhd:50b0 p:0 m:0 dnr:0
00:19:23.444 [2024-10-17 17:43:48.192358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:19:23.444 [2024-10-17 17:43:48.192367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32764 cdw0:10cfe60 sqhd:50b0 p:0 m:0 dnr:0
00:19:23.444 [2024-10-17 17:43:48.209563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:19:23.444 [2024-10-17 17:43:48.209582] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:19:23.444 [2024-10-17 17:43:48.209593] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:23.444 [2024-10-17 17:43:48.212394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:23.444 [2024-10-17 17:43:48.254715] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
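The records above trace the failover itself: the qpair to 192.168.100.8:4420 is torn down, its queued I/O is aborted, and the controller is reset against the alternate listener on port 4421. A conceptual C sketch of such a try-next-path loop; every name here is hypothetical and stands in for SPDK's internal bdev_nvme failover logic, it is not its API:

#include <stdbool.h>
#include <stdio.h>

struct path { const char *addr; const char *svcid; };

/* Hypothetical helper, not SPDK API: stands in for tearing down the old
 * qpair and reconnecting the controller against one listener. */
static bool try_connect(const struct path *p)
{
    printf("resetting controller against %s:%s\n", p->addr, p->svcid);
    /* In this sketch the first (failed) path keeps failing and the
     * alternate one succeeds, matching the log above. */
    return p->svcid[3] == '1';
}

int main(void)
{
    const struct path paths[] = {
        { "192.168.100.8", "4420" },  /* path whose qpair was just freed */
        { "192.168.100.8", "4421" },  /* failover target */
    };

    for (unsigned i = 0; i < sizeof(paths) / sizeof(paths[0]); i++) {
        if (try_connect(&paths[i])) {
            printf("Resetting controller successful.\n");
            return 0;
        }
        printf("%s:%s still in failed state, trying next trid\n",
               paths[i].addr, paths[i].svcid);
    }
    return 1;
}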
00:19:23.444 11377.00 IOPS, 44.44 MiB/s [2024-10-17T15:44:01.835Z] 12963.75 IOPS, 50.64 MiB/s [2024-10-17T15:44:01.835Z] 12368.20 IOPS, 48.31 MiB/s [2024-10-17T15:44:01.835Z]
00:19:23.444 [2024-10-17 17:43:51.664347 - 17:43:51.666481] nvme_qpair.c: 243/474: the abort dump repeats on the next failover event, in the same one-pair-per-command form: every queued I/O on sqid:1 printed and completed as ABORTED - SQ DELETION (00/08), WRITE lba 113824-114152 (SGL DATA BLOCK OFFSET 0x0, in-capsule) interleaved with READ lba 113152-113640 (SGL KEYED DATA BLOCK, key:0x181f00); the dump breaks off mid-record at READ lba:113640.
DATA BLOCK ADDRESS 0x20000430e000 len:0x1000 key:0x181f00 00:19:23.447 [2024-10-17 17:43:51.666490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.447 [2024-10-17 17:43:51.666501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:113648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x181f00 00:19:23.447 [2024-10-17 17:43:51.666510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.447 [2024-10-17 17:43:51.666521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:113656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x181f00 00:19:23.447 [2024-10-17 17:43:51.666531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.447 [2024-10-17 17:43:51.666542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:113664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437a000 len:0x1000 key:0x181f00 00:19:23.447 [2024-10-17 17:43:51.666551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.447 [2024-10-17 17:43:51.666564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004378000 len:0x1000 key:0x181f00 00:19:23.447 [2024-10-17 17:43:51.666573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.447 [2024-10-17 17:43:51.666583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:113680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004376000 len:0x1000 key:0x181f00 00:19:23.447 [2024-10-17 17:43:51.666593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.447 [2024-10-17 17:43:51.666604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:113688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004374000 len:0x1000 key:0x181f00 00:19:23.447 [2024-10-17 17:43:51.666614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.447 [2024-10-17 17:43:51.666624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:113696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004302000 len:0x1000 key:0x181f00 00:19:23.447 [2024-10-17 17:43:51.666634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.447 [2024-10-17 17:43:51.666645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:113704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004300000 len:0x1000 key:0x181f00 00:19:23.447 [2024-10-17 17:43:51.666654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.447 [2024-10-17 17:43:51.666665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:113712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004340000 len:0x1000 key:0x181f00 
00:19:23.447 [2024-10-17 17:43:51.666674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.447 [2024-10-17 17:43:51.666684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:113720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004342000 len:0x1000 key:0x181f00 00:19:23.447 [2024-10-17 17:43:51.666694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.447 [2024-10-17 17:43:51.666704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:114160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.447 [2024-10-17 17:43:51.666713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.447 [2024-10-17 17:43:51.666724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:114168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.447 [2024-10-17 17:43:51.666733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.447 [2024-10-17 17:43:51.666743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:113728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432e000 len:0x1000 key:0x181f00 00:19:23.447 [2024-10-17 17:43:51.666752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.447 [2024-10-17 17:43:51.666763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:113736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432c000 len:0x1000 key:0x181f00 00:19:23.447 [2024-10-17 17:43:51.666772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.447 [2024-10-17 17:43:51.666784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004394000 len:0x1000 key:0x181f00 00:19:23.447 [2024-10-17 17:43:51.666795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.448 [2024-10-17 17:43:51.666806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:113752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004396000 len:0x1000 key:0x181f00 00:19:23.448 [2024-10-17 17:43:51.666815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.448 [2024-10-17 17:43:51.666825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:113760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004398000 len:0x1000 key:0x181f00 00:19:23.448 [2024-10-17 17:43:51.666836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.448 [2024-10-17 17:43:51.666847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:113768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439a000 len:0x1000 key:0x181f00 00:19:23.448 [2024-10-17 17:43:51.666856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.448 [2024-10-17 17:43:51.666867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:113776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434c000 len:0x1000 key:0x181f00 00:19:23.448 [2024-10-17 17:43:51.666876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.448 [2024-10-17 17:43:51.666887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:113784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434e000 len:0x1000 key:0x181f00 00:19:23.448 [2024-10-17 17:43:51.666896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.448 [2024-10-17 17:43:51.666907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:113792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004350000 len:0x1000 key:0x181f00 00:19:23.448 [2024-10-17 17:43:51.666916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.448 [2024-10-17 17:43:51.666927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:113800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004352000 len:0x1000 key:0x181f00 00:19:23.448 [2024-10-17 17:43:51.666936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.448 [2024-10-17 17:43:51.666947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:113808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004354000 len:0x1000 key:0x181f00 00:19:23.448 [2024-10-17 17:43:51.666956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.448 [2024-10-17 17:43:51.668115] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:23.448 [2024-10-17 17:43:51.668130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:23.448 [2024-10-17 17:43:51.668139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113816 len:8 PRP1 0x0 PRP2 0x0 00:19:23.448 [2024-10-17 17:43:51.668148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.448 [2024-10-17 17:43:51.668190] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000168e4840 was disconnected and freed. reset controller. 00:19:23.448 [2024-10-17 17:43:51.668203] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:19:23.448 [2024-10-17 17:43:51.668213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:23.448 [2024-10-17 17:43:51.671034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:23.448 [2024-10-17 17:43:51.684958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:23.448 [2024-10-17 17:43:51.727097] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
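[editor's note: the failover hop just logged (4421 -> 4422) is one of the events host/failover.sh counts at the end of the run, where it requires exactly three successful resets. A minimal sketch of that kind of post-hoc check in shell, assuming the console output above were captured to a hypothetical file named failover.log:

    # Count completed controller resets in the captured bdevperf output.
    count=$(grep -c 'Resetting controller successful' failover.log)
    # Show each trid hop for eyeballing (4421 -> 4422 -> 4420).
    grep 'bdev_nvme_failover_trid' failover.log
    if (( count != 3 )); then
        echo "expected 3 successful resets, saw $count" >&2
        exit 1
    fi
]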
00:19:23.448 11405.50 IOPS, 44.55 MiB/s [2024-10-17T15:44:01.839Z] 12354.43 IOPS, 48.26 MiB/s [2024-10-17T15:44:01.839Z] 13065.25 IOPS, 51.04 MiB/s [2024-10-17T15:44:01.839Z] 13600.11 IOPS, 53.13 MiB/s [2024-10-17T15:44:01.839Z] [2024-10-17 17:43:56.098830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:85256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.448 [2024-10-17 17:43:56.098869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.448 [2024-10-17 17:43:56.098889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:85264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.448 [2024-10-17 17:43:56.098899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.448 [2024-10-17 17:43:56.098910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:85272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.448 [2024-10-17 17:43:56.098920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.448 [2024-10-17 17:43:56.098931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:85280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.448 [2024-10-17 17:43:56.098940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.448 [2024-10-17 17:43:56.098951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:85288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.448 [2024-10-17 17:43:56.098960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.448 [2024-10-17 17:43:56.098971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:85296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.448 [2024-10-17 17:43:56.098980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.448 [2024-10-17 17:43:56.098992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004336000 len:0x1000 key:0x181500 00:19:23.448 [2024-10-17 17:43:56.099001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.448 [2024-10-17 17:43:56.099013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:84864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004334000 len:0x1000 key:0x181500 00:19:23.448 [2024-10-17 17:43:56.099022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.448 [2024-10-17 17:43:56.099033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:84872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004332000 len:0x1000 key:0x181500 00:19:23.448 [2024-10-17 17:43:56.099042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.448 [2024-10-17 
17:43:56.099053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:84880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004330000 len:0x1000 key:0x181500 00:19:23.448 [2024-10-17 17:43:56.099063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.448 [2024-10-17 17:43:56.099073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:84888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004380000 len:0x1000 key:0x181500 00:19:23.448 [2024-10-17 17:43:56.099087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.448 [2024-10-17 17:43:56.099099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:84896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004382000 len:0x1000 key:0x181500 00:19:23.448 [2024-10-17 17:43:56.099108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.448 [2024-10-17 17:43:56.099119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:84904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004354000 len:0x1000 key:0x181500 00:19:23.448 [2024-10-17 17:43:56.099128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.448 [2024-10-17 17:43:56.099139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:84912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004356000 len:0x1000 key:0x181500 00:19:23.448 [2024-10-17 17:43:56.099148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.448 [2024-10-17 17:43:56.099159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:85304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.448 [2024-10-17 17:43:56.099168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.448 [2024-10-17 17:43:56.099179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:85312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.448 [2024-10-17 17:43:56.099188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.448 [2024-10-17 17:43:56.099199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:85320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.448 [2024-10-17 17:43:56.099208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.448 [2024-10-17 17:43:56.099220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:85328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.448 [2024-10-17 17:43:56.099229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.448 [2024-10-17 17:43:56.099240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:85336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.448 [2024-10-17 
17:43:56.099249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.448 [2024-10-17 17:43:56.099260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:85344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.448 [2024-10-17 17:43:56.099269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.448 [2024-10-17 17:43:56.099280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:85352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.448 [2024-10-17 17:43:56.099289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:85360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.449 [2024-10-17 17:43:56.099310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:85368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.449 [2024-10-17 17:43:56.099332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:85376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.449 [2024-10-17 17:43:56.099352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.449 [2024-10-17 17:43:56.099372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.449 [2024-10-17 17:43:56.099391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:85400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.449 [2024-10-17 17:43:56.099411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.449 [2024-10-17 17:43:56.099437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:85416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:23.449 [2024-10-17 17:43:56.099456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:85424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.449 [2024-10-17 17:43:56.099476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:84920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004328000 len:0x1000 key:0x181500 00:19:23.449 [2024-10-17 17:43:56.099495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:84928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432a000 len:0x1000 key:0x181500 00:19:23.449 [2024-10-17 17:43:56.099515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:84936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439c000 len:0x1000 key:0x181500 00:19:23.449 [2024-10-17 17:43:56.099535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439e000 len:0x1000 key:0x181500 00:19:23.449 [2024-10-17 17:43:56.099554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:84952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a0000 len:0x1000 key:0x181500 00:19:23.449 [2024-10-17 17:43:56.099575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:84960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a2000 len:0x1000 key:0x181500 00:19:23.449 [2024-10-17 17:43:56.099595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:84968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a4000 len:0x1000 key:0x181500 00:19:23.449 [2024-10-17 17:43:56.099614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:84976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a6000 len:0x1000 key:0x181500 00:19:23.449 [2024-10-17 17:43:56.099634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:84984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004374000 len:0x1000 key:0x181500 00:19:23.449 [2024-10-17 17:43:56.099653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:84992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004376000 len:0x1000 key:0x181500 00:19:23.449 [2024-10-17 17:43:56.099673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:85000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004378000 len:0x1000 key:0x181500 00:19:23.449 [2024-10-17 17:43:56.099692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:85008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437a000 len:0x1000 key:0x181500 00:19:23.449 [2024-10-17 17:43:56.099712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:85016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x181500 00:19:23.449 [2024-10-17 17:43:56.099732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:85024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x181500 00:19:23.449 [2024-10-17 17:43:56.099752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:85032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b4000 len:0x1000 key:0x181500 00:19:23.449 [2024-10-17 17:43:56.099773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:85040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b6000 len:0x1000 key:0x181500 00:19:23.449 [2024-10-17 17:43:56.099794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:85048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004386000 len:0x1000 key:0x181500 00:19:23.449 [2024-10-17 17:43:56.099814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 
m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:85056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004384000 len:0x1000 key:0x181500 00:19:23.449 [2024-10-17 17:43:56.099834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:85064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bc000 len:0x1000 key:0x181500 00:19:23.449 [2024-10-17 17:43:56.099854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:85072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043be000 len:0x1000 key:0x181500 00:19:23.449 [2024-10-17 17:43:56.099874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:85080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004358000 len:0x1000 key:0x181500 00:19:23.449 [2024-10-17 17:43:56.099895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:85088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435a000 len:0x1000 key:0x181500 00:19:23.449 [2024-10-17 17:43:56.099915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:85096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435c000 len:0x1000 key:0x181500 00:19:23.449 [2024-10-17 17:43:56.099935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.449 [2024-10-17 17:43:56.099945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:85104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435e000 len:0x1000 key:0x181500 00:19:23.450 [2024-10-17 17:43:56.099954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.099965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:85112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004360000 len:0x1000 key:0x181500 00:19:23.450 [2024-10-17 17:43:56.099974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.099985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:85120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004362000 len:0x1000 key:0x181500 00:19:23.450 [2024-10-17 17:43:56.099994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100004] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:85128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004364000 len:0x1000 key:0x181500 00:19:23.450 [2024-10-17 17:43:56.100013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:85136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004366000 len:0x1000 key:0x181500 00:19:23.450 [2024-10-17 17:43:56.100034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:85144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004368000 len:0x1000 key:0x181500 00:19:23.450 [2024-10-17 17:43:56.100054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:85152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436a000 len:0x1000 key:0x181500 00:19:23.450 [2024-10-17 17:43:56.100073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:85160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436c000 len:0x1000 key:0x181500 00:19:23.450 [2024-10-17 17:43:56.100093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:85168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436e000 len:0x1000 key:0x181500 00:19:23.450 [2024-10-17 17:43:56.100113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:85176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c0000 len:0x1000 key:0x181500 00:19:23.450 [2024-10-17 17:43:56.100132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:85184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c2000 len:0x1000 key:0x181500 00:19:23.450 [2024-10-17 17:43:56.100152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:85192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c4000 len:0x1000 key:0x181500 00:19:23.450 [2024-10-17 17:43:56.100172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:85200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c6000 len:0x1000 key:0x181500 00:19:23.450 [2024-10-17 17:43:56.100192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:85208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c8000 len:0x1000 key:0x181500 00:19:23.450 [2024-10-17 17:43:56.100211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:85216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ca000 len:0x1000 key:0x181500 00:19:23.450 [2024-10-17 17:43:56.100231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:85224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043cc000 len:0x1000 key:0x181500 00:19:23.450 [2024-10-17 17:43:56.100252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:85232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ce000 len:0x1000 key:0x181500 00:19:23.450 [2024-10-17 17:43:56.100272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:85432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.450 [2024-10-17 17:43:56.100291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:85440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.450 [2024-10-17 17:43:56.100311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:85448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.450 [2024-10-17 17:43:56.100330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.450 [2024-10-17 17:43:56.100350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:85464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.450 [2024-10-17 17:43:56.100369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.450 [2024-10-17 17:43:56.100388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:85480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.450 [2024-10-17 17:43:56.100408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.450 [2024-10-17 17:43:56.100431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:85496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.450 [2024-10-17 17:43:56.100450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:85504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.450 [2024-10-17 17:43:56.100470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:85512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.450 [2024-10-17 17:43:56.100490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:85520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.450 [2024-10-17 17:43:56.100510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:85528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.450 [2024-10-17 17:43:56.100529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.450 [2024-10-17 17:43:56.100549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:85544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.450 [2024-10-17 17:43:56.100568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:85552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.450 [2024-10-17 17:43:56.100587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:85560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.450 [2024-10-17 17:43:56.100607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.450 [2024-10-17 17:43:56.100626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:85576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.450 [2024-10-17 17:43:56.100649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:85584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.450 [2024-10-17 17:43:56.100669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.450 [2024-10-17 17:43:56.100679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.451 [2024-10-17 17:43:56.100688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.451 [2024-10-17 17:43:56.100698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:85600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.451 [2024-10-17 17:43:56.100707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.451 [2024-10-17 17:43:56.100718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.451 [2024-10-17 17:43:56.100727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.451 [2024-10-17 17:43:56.100738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:85616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.451 [2024-10-17 17:43:56.100747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.451 [2024-10-17 17:43:56.100758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b8000 len:0x1000 key:0x181500 00:19:23.451 [2024-10-17 17:43:56.100767] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.451 [2024-10-17 17:43:56.100778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:85248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ba000 len:0x1000 key:0x181500 00:19:23.451 [2024-10-17 17:43:56.100788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.451 [2024-10-17 17:43:56.100798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.451 [2024-10-17 17:43:56.100807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.451 [2024-10-17 17:43:56.100817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:85632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.451 [2024-10-17 17:43:56.100826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.451 [2024-10-17 17:43:56.100837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:85640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.451 [2024-10-17 17:43:56.100846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.451 [2024-10-17 17:43:56.100856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:85648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.451 [2024-10-17 17:43:56.100865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.451 [2024-10-17 17:43:56.100876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.451 [2024-10-17 17:43:56.100884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.451 [2024-10-17 17:43:56.100895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:85664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.451 [2024-10-17 17:43:56.100904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.451 [2024-10-17 17:43:56.100915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:85672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.451 [2024-10-17 17:43:56.100923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.451 [2024-10-17 17:43:56.100934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:85680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.451 [2024-10-17 17:43:56.100942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.451 [2024-10-17 17:43:56.100953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:23.451 [2024-10-17 17:43:56.100963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.451 [2024-10-17 17:43:56.100975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:85696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:23.451 [2024-10-17 17:43:56.100984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6582f000 sqhd:7250 p:0 m:0 dnr:0 00:19:23.451 [... the same WRITE command / ABORTED - SQ DELETION completion pair repeats for lba:85704 through lba:85864 in steps of 8; only cid and lba vary ...] 00:19:23.451 [2024-10-17 17:43:56.102541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:23.451 [2024-10-17 17:43:56.102555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:23.451 [2024-10-17 17:43:56.102563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85872 len:8 PRP1 0x0 PRP2 0x0 00:19:23.451 [2024-10-17 17:43:56.102572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:23.451 [2024-10-17 17:43:56.102615] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000168e4840 was disconnected and freed. reset controller. 00:19:23.451 [2024-10-17 17:43:56.102627] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:19:23.451 [2024-10-17 17:43:56.102638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:23.452 [2024-10-17 17:43:56.106721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:23.452 12240.10 IOPS, 47.81 MiB/s [2024-10-17T15:44:01.843Z] [2024-10-17 17:43:56.120309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:23.452 [2024-10-17 17:43:56.160669] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
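The "(00/08)" pair that spdk_nvme_print_completion logs is the NVMe status code type and status code in hex: SCT 0x0 (generic command status) with SC 0x08, which the NVMe spec defines as Command Aborted due to SQ Deletion, consistent with the ABORTED - SQ DELETION text printed alongside it. A minimal shell sketch for decoding the pair when reading such logs (the helper name is illustrative, not part of SPDK):

    # decode the "(SCT/SC)" hex pair printed by spdk_nvme_print_completion
    decode_sct_sc() {
        local pair=${1//[()]/}                              # "(00/08)" -> "00/08"
        printf 'SCT 0x%02x SC 0x%02x\n' "$((16#${pair%/*}))" "$((16#${pair#*/}))"
    }
    decode_sct_sc "(00/08)"                                 # -> SCT 0x00 SC 0x08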
00:19:23.452 12669.00 IOPS, 49.49 MiB/s [2024-10-17T15:44:01.843Z] 13108.08 IOPS, 51.20 MiB/s [2024-10-17T15:44:01.843Z] 13480.54 IOPS, 52.66 MiB/s [2024-10-17T15:44:01.843Z] 13795.50 IOPS, 53.89 MiB/s [2024-10-17T15:44:01.843Z] 14071.00 IOPS, 54.96 MiB/s 00:19:23.452 Latency(us) 00:19:23.452 [2024-10-17T15:44:01.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.452 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:23.452 Verification LBA range: start 0x0 length 0x4000 00:19:23.452 NVMe0n1 : 15.01 14071.35 54.97 280.76 0.00 8896.17 343.71 1050399.61 00:19:23.452 [2024-10-17T15:44:01.843Z] =================================================================================================================== 00:19:23.452 [2024-10-17T15:44:01.843Z] Total : 14071.35 54.97 280.76 0.00 8896.17 343.71 1050399.61 00:19:23.452 Received shutdown signal, test time was about 15.000000 seconds 00:19:23.452 00:19:23.452 Latency(us) 00:19:23.452 [2024-10-17T15:44:01.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.452 [2024-10-17T15:44:01.843Z] =================================================================================================================== 00:19:23.452 [2024-10-17T15:44:01.843Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:23.452 17:44:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:19:23.452 17:44:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:19:23.452 17:44:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:19:23.452 17:44:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=685837 00:19:23.452 17:44:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:19:23.452 17:44:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 685837 /var/tmp/bdevperf.sock 00:19:23.452 17:44:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 685837 ']' 00:19:23.452 17:44:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:23.452 17:44:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:23.452 17:44:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:23.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
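The pass/fail gate traced here is a simple count assertion: the 15-second run is expected to log exactly three successful controller resets, and host/failover.sh greps its own output file for them. Reduced to a standalone sketch (log path shortened; the expected count of 3 comes from the trace above):

    count=$(grep -c 'Resetting controller successful' try.txt)
    if (( count != 3 )); then
        echo "expected 3 successful resets, saw $count" >&2
        exit 1
    fi

bdevperf is then relaunched with -z and -r /var/tmp/bdevperf.sock so it sits idle on an RPC socket; the trace that follows attaches the listeners and controllers through rpc.py before bdevperf.py perform_tests starts the I/O.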
00:19:23.452 17:44:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:23.452 17:44:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:23.452 17:44:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:23.452 17:44:01 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:19:23.452 17:44:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:19:23.452 [2024-10-17 17:44:01.823291] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:19:23.710 17:44:01 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:19:23.710 [2024-10-17 17:44:02.023981] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:19:23.710 17:44:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:23.968 NVMe0n1 00:19:23.968 17:44:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:24.226 00:19:24.226 17:44:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:24.484 00:19:24.484 17:44:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:24.484 17:44:02 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:19:24.742 17:44:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:24.999 17:44:03 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:19:28.281 17:44:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:28.281 17:44:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:19:28.281 17:44:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=686471 00:19:28.281 17:44:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:28.281 17:44:06 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 686471 00:19:29.214 { 00:19:29.214 "results": [ 00:19:29.214 { 00:19:29.214 "job": "NVMe0n1", 
00:19:29.214 "core_mask": "0x1", 00:19:29.214 "workload": "verify", 00:19:29.214 "status": "finished", 00:19:29.214 "verify_range": { 00:19:29.214 "start": 0, 00:19:29.214 "length": 16384 00:19:29.214 }, 00:19:29.214 "queue_depth": 128, 00:19:29.214 "io_size": 4096, 00:19:29.214 "runtime": 1.00833, 00:19:29.214 "iops": 17771.959576725872, 00:19:29.214 "mibps": 69.42171709658544, 00:19:29.214 "io_failed": 0, 00:19:29.214 "io_timeout": 0, 00:19:29.214 "avg_latency_us": 7163.2886459627325, 00:19:29.214 "min_latency_us": 2564.4521739130437, 00:19:29.214 "max_latency_us": 11226.601739130434 00:19:29.214 } 00:19:29.214 ], 00:19:29.214 "core_count": 1 00:19:29.214 } 00:19:29.214 17:44:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:29.214 [2024-10-17 17:44:01.435176] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:19:29.214 [2024-10-17 17:44:01.435237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid685837 ] 00:19:29.214 [2024-10-17 17:44:01.508248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.214 [2024-10-17 17:44:01.549547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.214 [2024-10-17 17:44:03.214846] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:19:29.214 [2024-10-17 17:44:03.215337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:29.214 [2024-10-17 17:44:03.215371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:29.214 [2024-10-17 17:44:03.236855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:29.214 [2024-10-17 17:44:03.253268] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:29.214 Running I/O for 1 seconds... 
00:19:29.214 17764.00 IOPS, 69.39 MiB/s 00:19:29.215 Latency(us) 00:19:29.215 [2024-10-17T15:44:07.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.215 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:29.215 Verification LBA range: start 0x0 length 0x4000 00:19:29.215 NVMe0n1 : 1.01 17771.96 69.42 0.00 0.00 7163.29 2564.45 11226.60 00:19:29.215 [2024-10-17T15:44:07.606Z] =================================================================================================================== 00:19:29.215 [2024-10-17T15:44:07.606Z] Total : 17771.96 69.42 0.00 0.00 7163.29 2564.45 11226.60 00:19:29.215 17:44:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:29.215 17:44:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:19:29.472 17:44:07 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:29.730 17:44:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:29.730 17:44:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:19:29.988 17:44:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:30.246 17:44:08 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:19:33.527 17:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:33.527 17:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:19:33.527 17:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 685837 00:19:33.527 17:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 685837 ']' 00:19:33.527 17:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 685837 00:19:33.527 17:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:19:33.527 17:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:33.527 17:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 685837 00:19:33.527 17:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:33.527 17:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:33.527 17:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 685837' 00:19:33.527 killing process with pid 685837 00:19:33.527 17:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 685837 00:19:33.527 17:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 685837 00:19:33.527 17:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- 
host/failover.sh@110 -- # sync 00:19:33.527 17:44:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:33.785 17:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:19:33.785 17:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:33.785 17:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:19:33.785 17:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:19:33.785 17:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:19:33.785 17:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:19:33.785 17:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:19:33.785 17:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:19:33.785 17:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:33.785 17:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:19:33.785 rmmod nvme_rdma 00:19:33.785 rmmod nvme_fabrics 00:19:33.785 17:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:33.785 17:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:19:33.785 17:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:19:33.785 17:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 683442 ']' 00:19:33.785 17:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 683442 00:19:33.785 17:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 683442 ']' 00:19:33.785 17:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 683442 00:19:33.785 17:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:19:33.785 17:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:33.785 17:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 683442 00:19:34.043 17:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:34.043 17:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:34.043 17:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 683442' 00:19:34.043 killing process with pid 683442 00:19:34.043 17:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 683442 00:19:34.043 17:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 683442 00:19:34.301 17:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:19:34.301 17:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:19:34.301 00:19:34.301 real 0m36.081s 00:19:34.301 user 1m59.451s 00:19:34.301 sys 0m7.469s 00:19:34.301 17:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:34.301 17:44:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:34.301 
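The killprocess helper traced during teardown uses kill -0 to probe that the PID is still alive without signalling it, ps --no-headers -o comm= to recover the process name (and to refuse to kill anything named sudo), then kill and wait to reap it. A condensed sketch of that flow as seen in the trace:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 0                    # already gone, nothing to do
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1                # guard: never kill the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                    # wait works because $pid is a child of this shell
    }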
************************************ 00:19:34.301 END TEST nvmf_failover 00:19:34.301 ************************************ 00:19:34.301 17:44:12 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:19:34.301 17:44:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:34.301 17:44:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:34.301 17:44:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.301 ************************************ 00:19:34.301 START TEST nvmf_host_discovery 00:19:34.301 ************************************ 00:19:34.301 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:19:34.301 * Looking for test storage... 00:19:34.301 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:34.301 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:34.301 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:19:34.301 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:34.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.560 --rc genhtml_branch_coverage=1 00:19:34.560 --rc genhtml_function_coverage=1 00:19:34.560 --rc genhtml_legend=1 00:19:34.560 --rc geninfo_all_blocks=1 00:19:34.560 --rc geninfo_unexecuted_blocks=1 00:19:34.560 00:19:34.560 ' 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:34.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.560 --rc genhtml_branch_coverage=1 00:19:34.560 --rc genhtml_function_coverage=1 00:19:34.560 --rc genhtml_legend=1 00:19:34.560 --rc geninfo_all_blocks=1 00:19:34.560 --rc geninfo_unexecuted_blocks=1 00:19:34.560 00:19:34.560 ' 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:34.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.560 --rc genhtml_branch_coverage=1 00:19:34.560 --rc genhtml_function_coverage=1 00:19:34.560 --rc genhtml_legend=1 00:19:34.560 --rc geninfo_all_blocks=1 00:19:34.560 --rc geninfo_unexecuted_blocks=1 00:19:34.560 00:19:34.560 ' 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:34.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.560 --rc genhtml_branch_coverage=1 00:19:34.560 --rc genhtml_function_coverage=1 00:19:34.560 --rc genhtml_legend=1 00:19:34.560 --rc geninfo_all_blocks=1 00:19:34.560 --rc geninfo_unexecuted_blocks=1 00:19:34.560 00:19:34.560 ' 00:19:34.560 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:19:34.561 17:44:12 
nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:34.561 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the 
same IP for host and target.' 00:19:34.561 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:19:34.561 00:19:34.561 real 0m0.213s 00:19:34.561 user 0m0.127s 00:19:34.561 sys 0m0.103s 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.561 ************************************ 00:19:34.561 END TEST nvmf_host_discovery 00:19:34.561 ************************************ 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:34.561 ************************************ 00:19:34.561 START TEST nvmf_host_multipath_status 00:19:34.561 ************************************ 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:19:34.561 * Looking for test storage... 00:19:34.561 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:19:34.561 17:44:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:34.821 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:34.821 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:34.821 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:34.821 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:34.821 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:19:34.821 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:19:34.821 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:19:34.821 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:19:34.821 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:19:34.821 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:19:34.821 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:19:34.821 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:34.821 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:19:34.821 17:44:13 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:19:34.821 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:34.821 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:34.821 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:19:34.821 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:19:34.821 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:34.821 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:19:34.821 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:19:34.821 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:19:34.821 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:19:34.821 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:34.821 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:19:34.821 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:19:34.821 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:34.821 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:34.821 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:19:34.821 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:34.821 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:34.821 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.821 --rc genhtml_branch_coverage=1 00:19:34.821 --rc genhtml_function_coverage=1 00:19:34.821 --rc genhtml_legend=1 00:19:34.821 --rc geninfo_all_blocks=1 00:19:34.821 --rc geninfo_unexecuted_blocks=1 00:19:34.821 00:19:34.822 ' 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:34.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.822 --rc genhtml_branch_coverage=1 00:19:34.822 --rc genhtml_function_coverage=1 00:19:34.822 --rc genhtml_legend=1 00:19:34.822 --rc geninfo_all_blocks=1 00:19:34.822 --rc geninfo_unexecuted_blocks=1 00:19:34.822 00:19:34.822 ' 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:34.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.822 --rc genhtml_branch_coverage=1 00:19:34.822 --rc genhtml_function_coverage=1 00:19:34.822 --rc genhtml_legend=1 00:19:34.822 --rc geninfo_all_blocks=1 00:19:34.822 --rc geninfo_unexecuted_blocks=1 00:19:34.822 00:19:34.822 ' 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:34.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.822 --rc genhtml_branch_coverage=1 00:19:34.822 --rc genhtml_function_coverage=1 
00:19:34.822 --rc genhtml_legend=1 00:19:34.822 --rc geninfo_all_blocks=1 00:19:34.822 --rc geninfo_unexecuted_blocks=1 00:19:34.822 00:19:34.822 ' 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:34.822 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # prepare_net_devs 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # local -g is_hw=no 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # remove_spdk_ns 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:19:34.822 17:44:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:41.397 17:44:19 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # 
(( 2 == 0 )) 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:19:41.397 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:19:41.397 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:19:41.397 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:19:41.398 Found net devices under 0000:18:00.0: mlx_0_0 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:19:41.398 Found net devices under 0000:18:00.1: mlx_0_1 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # is_hw=yes 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # rdma_device_init 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # uname 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe ib_cm 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe ib_core 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe ib_umad 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@70 -- # modprobe iw_cm 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@528 -- # allocate_nic_ips 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # get_rdma_if_list 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:41.398 
17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:19:41.398 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:41.398 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:19:41.398 altname enp24s0f0np0 00:19:41.398 altname ens785f0np0 00:19:41.398 inet 192.168.100.8/24 scope global mlx_0_0 00:19:41.398 valid_lft forever preferred_lft forever 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 
00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:19:41.398 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:41.398 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:19:41.398 altname enp24s0f1np1 00:19:41.398 altname ens785f1np1 00:19:41.398 inet 192.168.100.9/24 scope global mlx_0_1 00:19:41.398 valid_lft forever preferred_lft forever 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # return 0 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # get_rdma_if_list 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_0 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@108 -- # echo mlx_0_1 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@109 -- # continue 2 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status 
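[annotation] The get_ip_address helper exercised at common.sh@116-@117 is a three-stage pipeline; reconstructed here from the trace (the field position assumes the one-record-per-line `ip -o` output format):

  get_ip_address() {
    local interface=$1
    # Field 4 of `ip -o -4 addr show` is ADDR/PREFIX; cut drops the prefix length.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig
  get_ip_address mlx_0_1   # -> 192.168.100.9
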
-- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:41.398 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:19:41.399 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:19:41.399 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:19:41.399 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # awk '{print $4}' 00:19:41.399 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # cut -d/ -f1 00:19:41.399 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:19:41.399 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:19:41.399 192.168.100.9' 00:19:41.399 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:19:41.399 192.168.100.9' 00:19:41.399 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # head -n 1 00:19:41.399 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:41.399 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:19:41.399 192.168.100.9' 00:19:41.399 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # tail -n +2 00:19:41.399 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # head -n 1 00:19:41.399 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:41.399 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:19:41.399 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:41.399 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:19:41.399 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:19:41.399 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:19:41.399 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:19:41.399 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:19:41.399 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:41.399 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:41.657 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
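[annotation] The two target IPs are then peeled off the newline-separated RDMA_IP_LIST with the head/tail pipeline shown at common.sh@483-@484; self-contained:

  RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9
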
nvmf/common.sh@507 -- # nvmfpid=690316 00:19:41.657 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:41.657 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # waitforlisten 690316 00:19:41.657 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 690316 ']' 00:19:41.657 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.657 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:41.657 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.657 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:41.657 17:44:19 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:41.657 [2024-10-17 17:44:19.839867] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:19:41.657 [2024-10-17 17:44:19.839929] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:41.657 [2024-10-17 17:44:19.910745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:41.657 [2024-10-17 17:44:19.953477] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:41.657 [2024-10-17 17:44:19.953523] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:41.657 [2024-10-17 17:44:19.953533] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:41.657 [2024-10-17 17:44:19.953557] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:41.657 [2024-10-17 17:44:19.953565] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:41.657 [2024-10-17 17:44:19.954654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.657 [2024-10-17 17:44:19.954657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.915 17:44:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:41.915 17:44:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:19:41.915 17:44:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:19:41.915 17:44:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:41.915 17:44:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:41.915 17:44:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.915 17:44:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=690316 00:19:41.915 17:44:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:41.915 [2024-10-17 17:44:20.298262] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1801bc0/0x18060b0) succeed. 00:19:42.173 [2024-10-17 17:44:20.307513] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1803110/0x1847750) succeed. 00:19:42.173 17:44:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:42.431 Malloc0 00:19:42.431 17:44:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:42.431 17:44:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:42.689 17:44:20 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:42.947 [2024-10-17 17:44:21.174222] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:42.947 17:44:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:19:43.205 [2024-10-17 17:44:21.362693] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:19:43.205 17:44:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:43.205 17:44:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=690524 00:19:43.205 17:44:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
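[annotation] Condensed, the target-side setup traced above is six RPCs; rpc.py abbreviates the full /var/jenkins/workspace/.../scripts/rpc.py path for readability, and the flag comments are assumptions from rpc.py's usual option names:

  rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0            # 64 MiB backing bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDK00000000000001 -r -m 2                # -r: ANA reporting on (needed below)
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
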
host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:43.205 17:44:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 690524 /var/tmp/bdevperf.sock 00:19:43.205 17:44:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 690524 ']' 00:19:43.205 17:44:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:43.205 17:44:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:43.205 17:44:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:43.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:43.205 17:44:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:43.205 17:44:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:43.463 17:44:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:43.463 17:44:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:19:43.463 17:44:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:43.463 17:44:21 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:44.029 Nvme0n1 00:19:44.029 17:44:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:44.029 Nvme0n1 00:19:44.029 17:44:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:44.029 17:44:22 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:19:46.559 17:44:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:19:46.559 17:44:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:19:46.559 17:44:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:19:46.559 17:44:24 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:19:47.491 17:44:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
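[annotation] On the host side, the same subsystem is attached twice under one controller name, which is what turns Nvme0n1 into a multipath bdev. A sketch of the sequence against the bdevperf RPC socket (rpc.py again abbreviated; -l/-o read as the usual controller-loss-timeout/reconnect-delay options):

  RPC="rpc.py -s /var/tmp/bdevperf.sock"
  $RPC bdev_nvme_set_options -r -1
  # Same -b/-n on both calls: the second attach adds a path instead of a new bdev.
  $RPC bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 \
       -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  $RPC bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 \
       -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
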
host/multipath_status.sh@92 -- # check_status true false true true true true 00:19:47.491 17:44:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:47.491 17:44:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:47.491 17:44:25 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:47.749 17:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:47.749 17:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:47.749 17:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:47.749 17:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:48.007 17:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:48.007 17:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:48.007 17:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:48.007 17:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:48.265 17:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:48.265 17:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:48.265 17:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:48.265 17:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:48.523 17:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:48.523 17:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:48.523 17:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:48.523 17:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:48.523 17:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:48.523 17:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
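[annotation] Every port_status probe from here to the end of the test is one RPC plus a jq projection and a string compare; reconstructed from multipath_status.sh@64:

  port_status() {
    local port=$1 attr=$2 expected=$3   # attr: current | connected | accessible
    local actual
    actual=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
             | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ $actual == "$expected" ]]
  }
  port_status 4420 current true   # true while 4420 is the path I/O is routed to

check_status then chains six such probes: the expected current, connected and accessible values for ports 4420 and 4421 in turn, which is exactly the pattern repeated below after every ANA transition.
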
00:19:48.523 17:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:48.523 17:44:26 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:48.781 17:44:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:48.781 17:44:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:19:48.781 17:44:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:19:49.039 17:44:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:19:49.297 17:44:27 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:19:50.237 17:44:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:19:50.237 17:44:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:50.237 17:44:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:50.237 17:44:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:50.536 17:44:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:50.536 17:44:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:50.536 17:44:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:50.536 17:44:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:50.536 17:44:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:50.536 17:44:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:50.536 17:44:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:50.536 17:44:28 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:50.851 17:44:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:50.851 17:44:29 
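[annotation] The set_ANA_state transitions driving those checks are two listener-level RPCs on the target side; a sketch matching the @59/@60 pairs in the trace:

  set_ANA_state() {
    local state_4420=$1 state_4421=$2
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
           -t rdma -a 192.168.100.8 -s 4420 -n "$state_4420"
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
           -t rdma -a 192.168.100.8 -s 4421 -n "$state_4421"
  }
  set_ANA_state non_optimized optimized   # demote 4420, promote 4421 (as above)
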
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:50.851 17:44:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:50.851 17:44:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:51.129 17:44:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:51.129 17:44:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:51.129 17:44:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:51.129 17:44:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:51.129 17:44:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:51.129 17:44:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:51.129 17:44:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:51.129 17:44:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:51.388 17:44:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:51.388 17:44:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:19:51.388 17:44:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:19:51.646 17:44:29 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:19:51.905 17:44:30 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:19:52.840 17:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:19:52.840 17:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:52.840 17:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.840 17:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:53.098 17:44:31 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:53.098 17:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:53.098 17:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:53.098 17:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:53.098 17:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:53.098 17:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:53.098 17:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:53.098 17:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:53.357 17:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:53.357 17:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:53.357 17:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:53.357 17:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:53.614 17:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:53.614 17:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:53.614 17:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:53.614 17:44:31 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:53.872 17:44:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:53.872 17:44:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:53.872 17:44:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:53.872 17:44:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:54.130 17:44:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:54.130 17:44:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # 
set_ANA_state non_optimized inaccessible 00:19:54.130 17:44:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:19:54.130 17:44:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:19:54.389 17:44:32 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:19:55.324 17:44:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:19:55.324 17:44:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:55.324 17:44:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.324 17:44:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:55.583 17:44:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:55.583 17:44:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:55.583 17:44:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.583 17:44:33 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:55.842 17:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:55.842 17:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:55.842 17:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.842 17:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:56.100 17:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:56.100 17:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:56.100 17:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:56.101 17:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:56.101 17:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:56.101 17:44:34 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:56.101 17:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:56.101 17:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:56.359 17:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:56.359 17:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:56.359 17:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:56.359 17:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:56.618 17:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:56.618 17:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:19:56.618 17:44:34 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:19:56.877 17:44:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:19:56.877 17:44:35 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:19:58.257 17:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:19:58.257 17:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:58.257 17:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.257 17:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:58.257 17:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:58.257 17:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:58.257 17:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.257 17:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:58.257 17:44:36 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:58.257 17:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:58.257 17:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:58.257 17:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.517 17:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:58.517 17:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:58.517 17:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.517 17:44:36 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:58.776 17:44:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:58.776 17:44:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:58.776 17:44:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.776 17:44:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:59.040 17:44:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:59.040 17:44:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:59.040 17:44:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:59.040 17:44:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:59.040 17:44:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:59.040 17:44:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:19:59.040 17:44:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:19:59.302 17:44:37 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:19:59.561 17:44:37 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:20:00.499 17:44:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:20:00.499 17:44:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:00.499 17:44:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:00.499 17:44:38 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:00.758 17:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:00.758 17:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:00.758 17:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:00.758 17:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.018 17:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:01.018 17:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:01.018 17:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.018 17:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:01.018 17:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:01.018 17:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:01.018 17:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.018 17:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:01.278 17:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:01.278 17:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:01.278 17:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.278 17:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:01.537 17:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
[[ false == \f\a\l\s\e ]] 00:20:01.537 17:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:01.537 17:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.537 17:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:01.795 17:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:01.795 17:44:39 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:20:02.054 17:44:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:20:02.054 17:44:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:20:02.054 17:44:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:20:02.314 17:44:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:20:03.251 17:44:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:20:03.251 17:44:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:03.251 17:44:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:03.251 17:44:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:03.510 17:44:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:03.510 17:44:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:03.510 17:44:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:03.510 17:44:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:03.769 17:44:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:03.769 17:44:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:03.769 17:44:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
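[annotation] At multipath_status.sh@116 the test switches Nvme0n1 away from the (assumed default) active_passive policy to active_active; once both listeners report optimized, both paths are expected to show current==true at the same time, which the check_status true true ... rounds around this point verify. The switch itself is a single RPC:

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
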
00:20:03.769 17:44:41 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.028 17:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:04.028 17:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:04.029 17:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.029 17:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:04.029 17:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:04.029 17:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:04.029 17:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.029 17:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:04.287 17:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:04.288 17:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:04.288 17:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:04.288 17:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.614 17:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:04.614 17:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:20:04.614 17:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:20:04.871 17:44:42 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:20:04.871 17:44:43 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:20:05.805 17:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:20:05.805 17:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:05.805 17:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:05.805 17:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:06.064 17:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:06.064 17:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:06.064 17:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:06.064 17:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:06.324 17:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:06.324 17:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:06.324 17:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:06.324 17:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:06.582 17:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:06.582 17:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:06.583 17:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:06.583 17:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:06.583 17:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:06.583 17:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:06.583 17:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:06.583 17:44:44 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:06.841 17:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:06.841 17:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:06.841 17:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:06.841 17:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq 
-r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:07.099 17:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:07.099 17:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:20:07.099 17:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:20:07.359 17:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:20:07.617 17:44:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:20:08.552 17:44:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:20:08.552 17:44:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:08.552 17:44:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:08.552 17:44:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:08.810 17:44:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:08.810 17:44:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:08.811 17:44:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:08.811 17:44:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:08.811 17:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:08.811 17:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:08.811 17:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:08.811 17:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:09.068 17:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:09.068 17:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:09.068 17:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:09.068 17:44:47 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:09.326 17:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:09.326 17:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:09.326 17:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:09.326 17:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:09.585 17:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:09.585 17:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:09.585 17:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:09.585 17:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:09.585 17:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:09.585 17:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:20:09.585 17:44:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:20:09.844 17:44:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:20:10.104 17:44:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:20:11.040 17:44:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:20:11.040 17:44:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:11.040 17:44:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:11.040 17:44:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:11.299 17:44:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:11.299 17:44:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:11.299 17:44:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:11.299 17:44:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:11.558 17:44:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:11.558 17:44:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:11.558 17:44:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:11.558 17:44:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:11.817 17:44:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:11.817 17:44:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:11.817 17:44:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:11.817 17:44:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:11.817 17:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:11.817 17:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:11.817 17:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:11.817 17:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:12.076 17:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:12.076 17:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:12.076 17:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:12.076 17:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:12.335 17:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:12.335 17:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 690524 00:20:12.335 17:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 690524 ']' 00:20:12.335 17:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 690524 00:20:12.335 17:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@955 -- # uname 00:20:12.335 17:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:12.335 17:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 690524 00:20:12.335 17:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:12.335 17:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:12.335 17:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 690524' 00:20:12.335 killing process with pid 690524 00:20:12.335 17:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 690524 00:20:12.335 17:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 690524 00:20:12.335 { 00:20:12.335 "results": [ 00:20:12.335 { 00:20:12.335 "job": "Nvme0n1", 00:20:12.335 "core_mask": "0x4", 00:20:12.335 "workload": "verify", 00:20:12.335 "status": "terminated", 00:20:12.335 "verify_range": { 00:20:12.335 "start": 0, 00:20:12.335 "length": 16384 00:20:12.335 }, 00:20:12.335 "queue_depth": 128, 00:20:12.335 "io_size": 4096, 00:20:12.335 "runtime": 28.094476, 00:20:12.335 "iops": 15737.257388249562, 00:20:12.335 "mibps": 61.47366167284985, 00:20:12.335 "io_failed": 0, 00:20:12.335 "io_timeout": 0, 00:20:12.335 "avg_latency_us": 8114.302622069645, 00:20:12.335 "min_latency_us": 662.4834782608696, 00:20:12.335 "max_latency_us": 3019898.88 00:20:12.335 } 00:20:12.335 ], 00:20:12.335 "core_count": 1 00:20:12.335 } 00:20:12.597 17:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 690524 00:20:12.597 17:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:12.597 [2024-10-17 17:44:21.426550] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:20:12.597 [2024-10-17 17:44:21.426615] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid690524 ] 00:20:12.597 [2024-10-17 17:44:21.493239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.597 [2024-10-17 17:44:21.538345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:12.597 Running I/O for 90 seconds... 
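Every status check traced above follows one pattern: query the bdevperf RPC socket with bdev_nvme_get_io_paths, filter the JSON with jq for the io_path whose trsvcid matches the port under test, and compare the requested attribute (current/connected/accessible) against the expected value. A minimal bash reconstruction of the two helpers visible in the xtrace output (inferred from the trace, not copied from multipath_status.sh; the argument names are assumptions):

port_status() {  # port_status <trsvcid> <attribute> <expected>
    local port=$1 attr=$2 expected=$3
    local actual
    # Ask the bdevperf app (not the target) for its current view of the io_paths.
    actual=$(/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
                 -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
             jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr")
    [[ "$actual" == "$expected" ]]
}

set_ANA_state() {  # set_ANA_state <state for 4420> <state for 4421>
    local rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # Flip the ANA state of the two listeners on cnode1 at the target side.
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
         -t rdma -a 192.168.100.8 -s 4420 -n "$1"
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
         -t rdma -a 192.168.100.8 -s 4421 -n "$2"
}

The sleep 1 between set_ANA_state and check_status gives the host time to process the ANA change before the path attributes are re-read; e.g. after set_ANA_state non_optimized inaccessible the trace expects 4420 to remain current/connected/accessible while 4421 drops to accessible=false.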
00:20:12.597 18277.00 IOPS, 71.39 MiB/s [2024-10-17T15:44:50.988Z] 18432.00 IOPS, 72.00 MiB/s [2024-10-17T15:44:50.988Z] 18466.00 IOPS, 72.13 MiB/s [2024-10-17T15:44:50.988Z] 18432.00 IOPS, 72.00 MiB/s [2024-10-17T15:44:50.988Z] 18420.80 IOPS, 71.96 MiB/s [2024-10-17T15:44:50.988Z] 18474.17 IOPS, 72.16 MiB/s [2024-10-17T15:44:50.988Z] 18446.43 IOPS, 72.06 MiB/s [2024-10-17T15:44:50.988Z] 18416.00 IOPS, 71.94 MiB/s [2024-10-17T15:44:50.988Z] 18390.22 IOPS, 71.84 MiB/s [2024-10-17T15:44:50.988Z] 18388.70 IOPS, 71.83 MiB/s [2024-10-17T15:44:50.988Z] 18389.55 IOPS, 71.83 MiB/s [2024-10-17T15:44:50.988Z] 18381.75 IOPS, 71.80 MiB/s [2024-10-17T15:44:50.988Z] [2024-10-17 17:44:35.014266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c2000 len:0x1000 key:0x182100 00:20:12.597 [2024-10-17 17:44:35.014310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:12.597 [2024-10-17 17:44:35.014348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c0000 len:0x1000 key:0x182100 00:20:12.597 [2024-10-17 17:44:35.014359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:12.597 [2024-10-17 17:44:35.014374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ba000 len:0x1000 key:0x182100 00:20:12.597 [2024-10-17 17:44:35.014384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:12.597 [2024-10-17 17:44:35.014396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b8000 len:0x1000 key:0x182100 00:20:12.597 [2024-10-17 17:44:35.014406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:12.597 [2024-10-17 17:44:35.014424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b6000 len:0x1000 key:0x182100 00:20:12.597 [2024-10-17 17:44:35.014434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.014446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b4000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.014456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.014468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b2000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.014478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.014490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1336 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x2000043b0000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.014499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.014511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ae000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.014527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.014540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ac000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.014549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.014561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043aa000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.014570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.014582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043be000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.014591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.014603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bc000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.014612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.014624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a4000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.014633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.014645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a2000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.014654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.014666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a0000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.014676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.014688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004360000 len:0x1000 key:0x182100 
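In the completion records above and below, "(03/02)" is the NVMe status pair sct/sc: Status Code Type 3h (Path Related Status) with Status Code 02h, Asymmetric Access Inaccessible. These are the in-flight READ/WRITE commands that complete with an ANA error while a listener is inaccessible; the host multipath layer is expected to retry them on the remaining accessible path, which is why the per-second stats dip from ~18.4K IOPS toward ~15K and then climb back instead of the verify job failing. A hypothetical post-mortem tally over the captured log (grep -c counts matching lines, one record per line in the real file):

grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' \
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt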
00:20:12.598 [2024-10-17 17:44:35.014697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.014709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004362000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.014718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.014730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.598 [2024-10-17 17:44:35.014739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.014751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.598 [2024-10-17 17:44:35.014761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.014774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439e000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.014784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.014797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439c000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.014806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.014818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000439a000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.014827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.014839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004398000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.014848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.014860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004396000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.014869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.014881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004394000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.014890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.014902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004392000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.014911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.014923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004390000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.014932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.014945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438e000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.014954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.014966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438c000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.014975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.014986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000438a000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.014996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.015009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004388000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.015020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.015032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004386000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.015041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.015053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004384000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.015062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.015075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004382000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.015084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.015096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004380000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.015106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.015118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437e000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.015127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.015139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437c000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.015148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.015160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000437a000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.015169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.015181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004378000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.015190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.015202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004376000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.015211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.015223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004374000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.015232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.015243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004372000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.015255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:12.598 [2024-10-17 17:44:35.015267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004370000 len:0x1000 key:0x182100 00:20:12.598 [2024-10-17 17:44:35.015276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 
17:44:35.015288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436e000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436c000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000436a000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004368000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004366000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004364000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004356000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004358000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435e000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015482] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435c000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000435a000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:1704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a6000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043a8000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004354000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004352000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004350000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434e000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434c000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:101 nsid:1 lba:1760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000434a000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004348000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004346000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004344000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004342000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004340000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433e000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433c000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000433a000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1832 len:8 SGL KEYED 
DATA BLOCK ADDRESS 0x200004338000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004336000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004334000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004332000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004330000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432e000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.015984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.015996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432c000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.016005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.016016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000432a000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.016025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.016037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004328000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.016046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.016058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004326000 len:0x1000 
key:0x182100 00:20:12.599 [2024-10-17 17:44:35.016067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.016079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004324000 len:0x1000 key:0x182100 00:20:12.599 [2024-10-17 17:44:35.016088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:12.599 [2024-10-17 17:44:35.016100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004322000 len:0x1000 key:0x182100 00:20:12.600 [2024-10-17 17:44:35.016109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:12.600 [2024-10-17 17:44:35.016121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:1928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004320000 len:0x1000 key:0x182100 00:20:12.600 [2024-10-17 17:44:35.016130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:12.600 [2024-10-17 17:44:35.016142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431e000 len:0x1000 key:0x182100 00:20:12.600 [2024-10-17 17:44:35.016151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:12.600 [2024-10-17 17:44:35.016163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431c000 len:0x1000 key:0x182100 00:20:12.600 [2024-10-17 17:44:35.016172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:12.600 [2024-10-17 17:44:35.016183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000431a000 len:0x1000 key:0x182100 00:20:12.600 [2024-10-17 17:44:35.016192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:12.600 [2024-10-17 17:44:35.016204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004318000 len:0x1000 key:0x182100 00:20:12.600 [2024-10-17 17:44:35.016213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:12.600 [2024-10-17 17:44:35.016227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004316000 len:0x1000 key:0x182100 00:20:12.600 [2024-10-17 17:44:35.016236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:12.600 [2024-10-17 17:44:35.016249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004314000 len:0x1000 key:0x182100 00:20:12.600 [2024-10-17 
17:44:35.016258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:12.600 [2024-10-17 17:44:35.016269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004312000 len:0x1000 key:0x182100 00:20:12.600 [2024-10-17 17:44:35.016279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:12.600 [2024-10-17 17:44:35.016292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004310000 len:0x1000 key:0x182100 00:20:12.600 [2024-10-17 17:44:35.016301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:12.600 [2024-10-17 17:44:35.016312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430e000 len:0x1000 key:0x182100 00:20:12.600 [2024-10-17 17:44:35.016321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:12.600 [2024-10-17 17:44:35.016334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430c000 len:0x1000 key:0x182100 00:20:12.600 [2024-10-17 17:44:35.016343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:12.600 [2024-10-17 17:44:35.016355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000430a000 len:0x1000 key:0x182100 00:20:12.600 [2024-10-17 17:44:35.016364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:12.600 [2024-10-17 17:44:35.016376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004308000 len:0x1000 key:0x182100 00:20:12.600 [2024-10-17 17:44:35.016385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:12.600 [2024-10-17 17:44:35.016397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004306000 len:0x1000 key:0x182100 00:20:12.600 [2024-10-17 17:44:35.016406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:12.600 [2024-10-17 17:44:35.016725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200004304000 len:0x1000 key:0x182100 00:20:12.600 [2024-10-17 17:44:35.016737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:12.600 [2024-10-17 17:44:35.016755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.600 [2024-10-17 17:44:35.016764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:12.600 [2024-10-17 17:44:35.016787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.600 [2024-10-17 17:44:35.016797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:12.600 [2024-10-17 17:44:35.017126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.600 [2024-10-17 17:44:35.017138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:12.600 [2024-10-17 17:44:35.017155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.600 [2024-10-17 17:44:35.017165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:12.600 [2024-10-17 17:44:35.017182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.600 [2024-10-17 17:44:35.017191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:12.600 [2024-10-17 17:44:35.017208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.600 [2024-10-17 17:44:35.017217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:12.600 [2024-10-17 17:44:35.017234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:2112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.600 [2024-10-17 17:44:35.017243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:12.600 [2024-10-17 17:44:35.017260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.600 [2024-10-17 17:44:35.017269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:12.600 [2024-10-17 17:44:35.017285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.600 [2024-10-17 17:44:35.017295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:12.600 [2024-10-17 17:44:35.017311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.600 [2024-10-17 17:44:35.017320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:12.600 [2024-10-17 17:44:35.017337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:12.600 [2024-10-17 17:44:35.017346] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:20:12.600 [2024-10-17 17:44:35.017362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:12.600 [2024-10-17 17:44:35.017371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:20:12.600 [... 18 further WRITE command/completion notice pairs of the same form (lba 2160 through 2296, sqhd 006d through 007e), every one completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, elided ...]
00:20:12.601 17656.62 IOPS, 68.97 MiB/s [2024-10-17T15:44:50.992Z] 16395.43 IOPS, 64.04 MiB/s [2024-10-17T15:44:50.992Z] 15302.40 IOPS, 59.77 MiB/s [2024-10-17T15:44:50.992Z] 14919.94 IOPS, 58.28 MiB/s [2024-10-17T15:44:50.992Z] 15126.65 IOPS, 59.09 MiB/s [2024-10-17T15:44:50.992Z] 15272.28 IOPS, 59.66 MiB/s [2024-10-17T15:44:50.992Z] 15248.16 IOPS, 59.56 MiB/s [2024-10-17T15:44:50.992Z] 15220.90 IOPS, 59.46 MiB/s [2024-10-17T15:44:50.992Z] 15298.90 IOPS, 59.76 MiB/s [2024-10-17T15:44:50.992Z] 15448.68 IOPS, 60.35 MiB/s [2024-10-17T15:44:50.992Z] 15581.87 IOPS, 60.87 MiB/s [2024-10-17T15:44:50.992Z] 15581.46 IOPS, 60.87 MiB/s [2024-10-17T15:44:50.992Z] 15545.16 IOPS, 60.72 MiB/s [2024-10-17T15:44:50.992Z]
00:20:12.601 [2024-10-17 17:44:48.362132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:58160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:20:12.601 [2024-10-17 17:44:48.362176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:20:12.601 [... 63 further READ/WRITE command/completion notice pairs (READs as SGL KEYED DATA BLOCK with key:0x182100, WRITEs as SGL DATA BLOCK OFFSET; lba 57584 through 58592, sqhd 0018 through 0056), every one completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, elided ...]
00:20:12.602 15535.23 IOPS, 60.68 MiB/s [2024-10-17T15:44:50.993Z] 15642.59 IOPS, 61.10 MiB/s [2024-10-17T15:44:50.993Z] 15734.75 IOPS, 61.46 MiB/s [2024-10-17T15:44:50.993Z]
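Two notes on the bursts above before the run summary. First, the "(03/02)" in every completion decodes as NVMe status code type 3h (path related) with status code 02h, i.e. the ANA group behind this path is reporting Asymmetric Access Inaccessible - exactly the path-down condition the multipath_status test provokes, so these notices are expected here rather than a device fault. Second, the IOPS/MiB/s pairs in the progress snapshots follow directly from the job's 4 KiB I/O size (IO size: 4096 in the summary below): MiB/s = IOPS x 4096 / 2^20. A quick check of the final figure, as an illustrative shell one-liner that is not part of the test scripts:

    $ awk 'BEGIN { printf "%.2f MiB/s\n", 15737.26 * 4096 / (1024 * 1024) }'
    61.47 MiB/s

which matches the Nvme0n1 row in the latency table that follows.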
Received shutdown signal, test time was about 28.095123 seconds
00:20:12.602
00:20:12.602 Latency(us)
00:20:12.602 [2024-10-17T15:44:50.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:12.602 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:12.602 Verification LBA range: start 0x0 length 0x4000
00:20:12.602 Nvme0n1 : 28.09 15737.26 61.47 0.00 0.00 8114.30 662.48 3019898.88
00:20:12.602 [2024-10-17T15:44:50.994Z] ===================================================================================================================
00:20:12.603 [2024-10-17T15:44:50.994Z] Total : 15737.26 61.47 0.00 0.00 8114.30 662.48 3019898.88
00:20:12.603 17:44:50 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:12.862 17:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:20:12.862 17:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:20:12.862 17:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:20:12.862 17:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # nvmfcleanup
00:20:12.862 17:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:20:12.862 17:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:20:12.862 17:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:20:12.862 17:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:20:12.862 17:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:12.862 17:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:20:12.862 rmmod nvme_rdma
00:20:12.862 rmmod nvme_fabrics
00:20:12.862 17:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:12.862 17:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:20:12.862 17:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:20:12.862 17:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@515 -- # '[' -n 690316 ']'
00:20:12.862 17:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # killprocess 690316
00:20:12.862 17:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 690316 ']'
00:20:12.862 17:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 690316
00:20:12.862 17:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:20:12.862 17:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:20:12.862 17:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 690316
00:20:12.862 17:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:20:12.862 17:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:20:12.862 17:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 690316'
00:20:12.862 killing process with pid 690316
00:20:12.862 17:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 690316
00:20:12.862 17:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 690316
00:20:13.121 17:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:20:13.121 17:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]]
00:20:13.121
00:20:13.121 real 0m38.546s
00:20:13.121 user 1m50.085s
00:20:13.121 sys 0m9.259s
00:20:13.121 17:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable
00:20:13.121 17:44:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:20:13.121 ************************************
00:20:13.121 END TEST nvmf_host_multipath_status
00:20:13.121 ************************************
00:20:13.121 17:44:51 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:20:13.121 17:44:51 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:20:13.121 17:44:51 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:20:13.121 17:44:51 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:20:13.121 ************************************
00:20:13.121 START TEST nvmf_discovery_remove_ifc
00:20:13.121 ************************************
00:20:13.121 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma
00:20:13.382 * Looking for test storage...
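Restating the teardown just traced for nvmf_host_multipath_status, since the same shape recurs after every nvmf host test in this log: delete the subsystem over JSON-RPC, unload the host-side fabrics modules, then kill and reap the target application. A condensed sketch of those steps (the rpc.py path, subsystem NQN and pid 690316 are taken from this run and would differ elsewhere):

    # teardown sequence as traced above, run from the SPDK checkout
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-rdma      # verbose output: "rmmod nvme_rdma", "rmmod nvme_fabrics"
    modprobe -v -r nvme-fabrics   # effectively a no-op: the line above already pulled it out
    kill 690316 && wait 690316    # wait works because the target is a child of this shell

The two rmmod lines in the log are modprobe's own -v output: removing nvme-rdma drags nvme_fabrics out with it, which is why the explicit nvme-fabrics removal that follows has nothing left to do.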
00:20:13.382 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:13.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.382 --rc genhtml_branch_coverage=1 00:20:13.382 --rc genhtml_function_coverage=1 00:20:13.382 --rc genhtml_legend=1 00:20:13.382 --rc geninfo_all_blocks=1 00:20:13.382 --rc geninfo_unexecuted_blocks=1 00:20:13.382 00:20:13.382 ' 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:13.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.382 --rc genhtml_branch_coverage=1 00:20:13.382 --rc genhtml_function_coverage=1 00:20:13.382 --rc genhtml_legend=1 00:20:13.382 --rc geninfo_all_blocks=1 00:20:13.382 --rc geninfo_unexecuted_blocks=1 00:20:13.382 00:20:13.382 ' 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:13.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.382 --rc genhtml_branch_coverage=1 00:20:13.382 --rc genhtml_function_coverage=1 00:20:13.382 --rc genhtml_legend=1 00:20:13.382 --rc geninfo_all_blocks=1 00:20:13.382 --rc geninfo_unexecuted_blocks=1 00:20:13.382 00:20:13.382 ' 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:13.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.382 --rc genhtml_branch_coverage=1 00:20:13.382 --rc genhtml_function_coverage=1 00:20:13.382 --rc genhtml_legend=1 00:20:13.382 --rc geninfo_all_blocks=1 00:20:13.382 --rc geninfo_unexecuted_blocks=1 00:20:13.382 00:20:13.382 ' 00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 
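The xtrace above is scripts/common.sh deciding whether the installed lcov (1.15, taken from awk '{print $NF}' on lcov --version) is older than 2, so that the matching LCOV_OPTS flags get exported. A minimal standalone sketch of that dotted-version comparison, simplified from the real cmp_versions (which also handles '>', '=' and mixed '.'/'-' separators):

    lt() {                            # usage: lt 1.15 2 -> succeeds if $1 < $2
        local IFS=.- i
        local -a ver1 ver2
        read -ra ver1 <<< "$1"        # "1.15" -> (1 15)
        read -ra ver2 <<< "$2"        # "2"    -> (2)
        for (( i = 0; i < ${#ver1[@]} || i < ${#ver2[@]}; i++ )); do
            (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
            (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
        done
        return 1                      # equal versions are not less-than
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov < 2: use the legacy --rc flag spelling"

Comparing numerically per component is what makes 1.15 rank above 1.2 (15 > 2), where a plain string comparison would rank it below.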
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=[... ~1.2 kB PATH value elided: the /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin segments repeated several times ahead of the standard system and SPDK pip paths ...]
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=[... same PATH value with /opt/go/1.21.1/bin prepended, elided ...]
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=[... same PATH value with /opt/protoc/21.7/bin prepended, elided ...]
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo [... the exported PATH value, elided ...]
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:13.382 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:20:13.383 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:20:13.383 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:20:13.383 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:20:13.383 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0
00:20:13.383 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']'
00:20:13.383 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc --
host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:20:13.383 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:20:13.383 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:20:13.383 00:20:13.383 real 0m0.235s 00:20:13.383 user 0m0.127s 00:20:13.383 sys 0m0.125s 00:20:13.383 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:13.383 17:44:51 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:13.383 ************************************ 00:20:13.383 END TEST nvmf_discovery_remove_ifc 00:20:13.383 ************************************ 00:20:13.383 17:44:51 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:20:13.383 17:44:51 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:13.383 17:44:51 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:13.383 17:44:51 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:13.642 ************************************ 00:20:13.642 START TEST nvmf_identify_kernel_target 00:20:13.643 ************************************ 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:20:13.643 * Looking for test storage... 00:20:13.643 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:13.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.643 --rc genhtml_branch_coverage=1 00:20:13.643 --rc genhtml_function_coverage=1 00:20:13.643 --rc genhtml_legend=1 00:20:13.643 --rc geninfo_all_blocks=1 00:20:13.643 --rc geninfo_unexecuted_blocks=1 00:20:13.643 00:20:13.643 ' 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:13.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.643 --rc genhtml_branch_coverage=1 00:20:13.643 --rc genhtml_function_coverage=1 00:20:13.643 --rc genhtml_legend=1 00:20:13.643 --rc geninfo_all_blocks=1 00:20:13.643 --rc geninfo_unexecuted_blocks=1 00:20:13.643 00:20:13.643 ' 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:13.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:13.643 --rc genhtml_branch_coverage=1 00:20:13.643 --rc genhtml_function_coverage=1 00:20:13.643 --rc genhtml_legend=1 00:20:13.643 --rc geninfo_all_blocks=1 00:20:13.643 --rc geninfo_unexecuted_blocks=1 00:20:13.643 00:20:13.643 ' 00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1705 -- # LCOV='lcov
00:20:13.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:13.643 --rc genhtml_branch_coverage=1
00:20:13.643 --rc genhtml_function_coverage=1
00:20:13.643 --rc genhtml_legend=1
00:20:13.643 --rc geninfo_all_blocks=1
00:20:13.643 --rc geninfo_unexecuted_blocks=1
00:20:13.643
00:20:13.643 '
00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:20:13.643 17:44:51 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s
00:20:13.643 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:13.643 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:13.643 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:13.643 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:13.643 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:13.643 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:13.643 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:13.643 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:13.643 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:13.643 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:13.643 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c
00:20:13.643 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c
00:20:13.643 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:13.643 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:13.643 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:20:13.643 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:20:13.643 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:20:13.643 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob
00:20:13.643 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:13.643 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:13.643 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:13.643 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=[... same duplicated /opt/golangci / /opt/protoc / /opt/go PATH value as in the previous test's preamble, elided ...]
00:20:13.644 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=[... same PATH value with /opt/go/1.21.1/bin prepended, elided ...]
00:20:13.644 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=[... same PATH value with /opt/protoc/21.7/bin prepended, elided ...]
00:20:13.644 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH
00:20:13.644 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo [... the exported PATH value, elided ...]
00:20:13.644 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0
00:20:13.644 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:20:13.644 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:20:13.644 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:20:13.644 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:13.644 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:13.644 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target --
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:13.644 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:13.644 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:13.644 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:13.644 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:13.644 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:20:13.644 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:20:13.644 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:13.644 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:13.644 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:13.644 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:13.644 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:13.644 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:13.644 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:13.902 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:13.902 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:13.902 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:20:13.902 17:44:52 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # 
local -ga x722 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:20:20.466 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:20:20.466 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1013 == 
\0\x\1\0\1\9 ]] 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:20:20.467 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:20:20.467 Found net devices under 0000:18:00.0: mlx_0_0 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:20:20.467 Found net devices under 0000:18:00.1: mlx_0_1 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.467 17:44:58 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # is_hw=yes 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # rdma_device_init 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # uname 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@528 -- # allocate_nic_ips 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:20.467 
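The module loading and interface matching traced above distill to a short standalone sketch: load the InfiniBand/RDMA kernel modules, then emit each PCI net device that rxe_cfg also reports as RDMA-capable. This is a minimal illustration of the traced logic, assuming net_devs is already populated (as it is at this point in the run); it is not the SPDK helper itself, and error handling is omitted.

# Load the IB/RDMA stack in the order traced from nvmf/common.sh@66-72.
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
  modprobe "$mod"
done
# rxe_cfg (scripts/rxe_cfg_small.sh in the trace) prints the RDMA-capable
# interface names; on this rig it returned mlx_0_0 and mlx_0_1.
mapfile -t rxe_net_devs < <(rxe_cfg rxe-net)
for net_dev in "${net_devs[@]}"; do
  for rxe_net_dev in "${rxe_net_devs[@]}"; do
    if [[ $net_dev == "$rxe_net_dev" ]]; then
      echo "$net_dev"
      continue 2   # move to the next net_dev as soon as one match is found
    fi
  done
done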
17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:20.467 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:20.467 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:20:20.467 altname enp24s0f0np0 00:20:20.467 altname ens785f0np0 00:20:20.467 inet 192.168.100.8/24 scope global mlx_0_0 00:20:20.467 valid_lft forever preferred_lft forever 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:20.467 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:20.467 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:20:20.467 altname enp24s0f1np1 00:20:20.467 altname ens785f1np1 00:20:20.467 inet 192.168.100.9/24 scope global mlx_0_1 00:20:20.467 valid_lft forever preferred_lft forever 00:20:20.467 17:44:58 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # return 0 00:20:20.467 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:20.726 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:20.726 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:20:20.726 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:20:20.726 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:20.726 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:20.726 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:20.726 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:20.726 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:20.726 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:20.726 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:20.726 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:20.726 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:20.726 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:20.726 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:20:20.726 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:20.726 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@109 -- # continue 2 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:20.727 
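Each get_ip_address call in the trace reduces to a single pipeline. A minimal sketch of that step in isolation (the interface names are the ones enumerated on this rig):

get_ip_address() {
  local interface=$1
  # `ip -o -4` prints one flat line per IPv4 address; field 4 is the
  # CIDR form (e.g. 192.168.100.8/24), so cut strips the prefix length.
  ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
get_ip_address mlx_0_1   # -> 192.168.100.9 in this run

Downstream, the traced head/tail pair simply takes the first line of the collected list as NVMF_FIRST_TARGET_IP and the second as NVMF_SECOND_TARGET_IP.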
17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:20:20.727 192.168.100.9' 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:20:20.727 192.168.100.9' 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # head -n 1 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:20:20.727 192.168.100.9' 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # head -n 1 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # tail -n +2 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@767 -- # local ip 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:20.727 17:44:58 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # local block nvme 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:20:20.727 17:44:58 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # modprobe nvmet 00:20:20.727 17:44:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:20.727 17:44:59 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:20:24.910 Waiting for block devices as requested 00:20:24.910 0000:5e:00.0 (144d a80a): vfio-pci -> nvme 00:20:24.910 0000:af:00.0 (8086 2701): vfio-pci -> nvme 00:20:24.910 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:20:24.910 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:20:24.910 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:20:24.910 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:20:24.910 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:20:24.910 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:20:24.910 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:20:25.169 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:20:25.169 0000:b0:00.0 (8086 2701): vfio-pci -> nvme 00:20:25.169 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:20:25.427 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:20:25.427 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:20:25.427 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:20:25.685 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:20:25.685 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:20:25.685 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:20:25.942 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:20:25.942 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:20:25.942 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:25.942 17:45:04 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:20:25.942 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:25.942 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:25.942 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:25.942 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:20:25.942 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:25.942 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:20:25.942 No valid GPT data, bailing 00:20:25.942 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:25.942 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:25.942 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:25.942 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:20:25.942 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:20:25.942 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:25.942 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme1n1 00:20:25.942 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:20:25.942 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:25.942 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:25.942 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme1n1 00:20:25.942 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:25.942 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:20:25.942 No valid GPT data, bailing 00:20:25.943 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:26.201 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:26.201 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:26.201 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme1n1 00:20:26.201 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:20:26.201 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme2n1 ]] 00:20:26.201 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # is_block_zoned nvme2n1 00:20:26.201 17:45:04 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:20:26.201 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:20:26.201 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:26.201 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # block_in_use nvme2n1 00:20:26.201 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme2n1 pt 00:20:26.201 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme2n1 00:20:26.201 No valid GPT data, bailing 00:20:26.201 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:20:26.201 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:26.201 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:26.201 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # nvme=/dev/nvme2n1 00:20:26.201 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # [[ -b /dev/nvme2n1 ]] 00:20:26.201 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:26.201 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:26.201 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:26.201 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:26.201 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:20:26.201 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # echo /dev/nvme2n1 00:20:26.201 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:20:26.201 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 192.168.100.8 00:20:26.201 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo rdma 00:20:26.201 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 4420 00:20:26.201 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo ipv4 00:20:26.201 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:26.201 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -a 192.168.100.8 -t rdma -s 4420 00:20:26.201 00:20:26.201 Discovery Log Number of Records 2, Generation counter 2 00:20:26.201 =====Discovery Log Entry 0====== 00:20:26.201 trtype: rdma 00:20:26.201 adrfam: ipv4 00:20:26.201 subtype: current discovery subsystem 00:20:26.201 treq: not specified, sq 
flow control disable supported 00:20:26.201 portid: 1 00:20:26.201 trsvcid: 4420 00:20:26.201 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:26.201 traddr: 192.168.100.8 00:20:26.201 eflags: none 00:20:26.201 rdma_prtype: not specified 00:20:26.201 rdma_qptype: connected 00:20:26.201 rdma_cms: rdma-cm 00:20:26.201 rdma_pkey: 0x0000 00:20:26.201 =====Discovery Log Entry 1====== 00:20:26.201 trtype: rdma 00:20:26.201 adrfam: ipv4 00:20:26.201 subtype: nvme subsystem 00:20:26.201 treq: not specified, sq flow control disable supported 00:20:26.201 portid: 1 00:20:26.201 trsvcid: 4420 00:20:26.201 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:26.201 traddr: 192.168.100.8 00:20:26.201 eflags: none 00:20:26.201 rdma_prtype: not specified 00:20:26.201 rdma_qptype: connected 00:20:26.201 rdma_cms: rdma-cm 00:20:26.201 rdma_pkey: 0x0000 00:20:26.460 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:20:26.460 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:20:26.460 ===================================================== 00:20:26.460 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:26.460 ===================================================== 00:20:26.460 Controller Capabilities/Features 00:20:26.460 ================================ 00:20:26.460 Vendor ID: 0000 00:20:26.460 Subsystem Vendor ID: 0000 00:20:26.460 Serial Number: 821d20e3717796595dde 00:20:26.460 Model Number: Linux 00:20:26.460 Firmware Version: 6.8.9-20 00:20:26.460 Recommended Arb Burst: 0 00:20:26.460 IEEE OUI Identifier: 00 00 00 00:20:26.460 Multi-path I/O 00:20:26.460 May have multiple subsystem ports: No 00:20:26.460 May have multiple controllers: No 00:20:26.460 Associated with SR-IOV VF: No 00:20:26.460 Max Data Transfer Size: Unlimited 00:20:26.460 Max Number of Namespaces: 0 00:20:26.460 Max Number of I/O Queues: 1024 00:20:26.460 NVMe Specification Version (VS): 1.3 00:20:26.460 NVMe Specification Version (Identify): 1.3 00:20:26.460 Maximum Queue Entries: 128 00:20:26.460 Contiguous Queues Required: No 00:20:26.460 Arbitration Mechanisms Supported 00:20:26.460 Weighted Round Robin: Not Supported 00:20:26.460 Vendor Specific: Not Supported 00:20:26.460 Reset Timeout: 7500 ms 00:20:26.460 Doorbell Stride: 4 bytes 00:20:26.460 NVM Subsystem Reset: Not Supported 00:20:26.460 Command Sets Supported 00:20:26.460 NVM Command Set: Supported 00:20:26.460 Boot Partition: Not Supported 00:20:26.460 Memory Page Size Minimum: 4096 bytes 00:20:26.460 Memory Page Size Maximum: 4096 bytes 00:20:26.460 Persistent Memory Region: Not Supported 00:20:26.460 Optional Asynchronous Events Supported 00:20:26.460 Namespace Attribute Notices: Not Supported 00:20:26.460 Firmware Activation Notices: Not Supported 00:20:26.460 ANA Change Notices: Not Supported 00:20:26.460 PLE Aggregate Log Change Notices: Not Supported 00:20:26.460 LBA Status Info Alert Notices: Not Supported 00:20:26.460 EGE Aggregate Log Change Notices: Not Supported 00:20:26.460 Normal NVM Subsystem Shutdown event: Not Supported 00:20:26.460 Zone Descriptor Change Notices: Not Supported 00:20:26.460 Discovery Log Change Notices: Supported 00:20:26.460 Controller Attributes 00:20:26.460 128-bit Host Identifier: Not Supported 00:20:26.460 Non-Operational Permissive Mode: Not Supported 00:20:26.460 NVM Sets: Not Supported 00:20:26.461 Read Recovery Levels: 
Not Supported 00:20:26.461 Endurance Groups: Not Supported 00:20:26.461 Predictable Latency Mode: Not Supported 00:20:26.461 Traffic Based Keep ALive: Not Supported 00:20:26.461 Namespace Granularity: Not Supported 00:20:26.461 SQ Associations: Not Supported 00:20:26.461 UUID List: Not Supported 00:20:26.461 Multi-Domain Subsystem: Not Supported 00:20:26.461 Fixed Capacity Management: Not Supported 00:20:26.461 Variable Capacity Management: Not Supported 00:20:26.461 Delete Endurance Group: Not Supported 00:20:26.461 Delete NVM Set: Not Supported 00:20:26.461 Extended LBA Formats Supported: Not Supported 00:20:26.461 Flexible Data Placement Supported: Not Supported 00:20:26.461 00:20:26.461 Controller Memory Buffer Support 00:20:26.461 ================================ 00:20:26.461 Supported: No 00:20:26.461 00:20:26.461 Persistent Memory Region Support 00:20:26.461 ================================ 00:20:26.461 Supported: No 00:20:26.461 00:20:26.461 Admin Command Set Attributes 00:20:26.461 ============================ 00:20:26.461 Security Send/Receive: Not Supported 00:20:26.461 Format NVM: Not Supported 00:20:26.461 Firmware Activate/Download: Not Supported 00:20:26.461 Namespace Management: Not Supported 00:20:26.461 Device Self-Test: Not Supported 00:20:26.461 Directives: Not Supported 00:20:26.461 NVMe-MI: Not Supported 00:20:26.461 Virtualization Management: Not Supported 00:20:26.461 Doorbell Buffer Config: Not Supported 00:20:26.461 Get LBA Status Capability: Not Supported 00:20:26.461 Command & Feature Lockdown Capability: Not Supported 00:20:26.461 Abort Command Limit: 1 00:20:26.461 Async Event Request Limit: 1 00:20:26.461 Number of Firmware Slots: N/A 00:20:26.461 Firmware Slot 1 Read-Only: N/A 00:20:26.461 Firmware Activation Without Reset: N/A 00:20:26.461 Multiple Update Detection Support: N/A 00:20:26.461 Firmware Update Granularity: No Information Provided 00:20:26.461 Per-Namespace SMART Log: No 00:20:26.461 Asymmetric Namespace Access Log Page: Not Supported 00:20:26.461 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:26.461 Command Effects Log Page: Not Supported 00:20:26.461 Get Log Page Extended Data: Supported 00:20:26.461 Telemetry Log Pages: Not Supported 00:20:26.461 Persistent Event Log Pages: Not Supported 00:20:26.461 Supported Log Pages Log Page: May Support 00:20:26.461 Commands Supported & Effects Log Page: Not Supported 00:20:26.461 Feature Identifiers & Effects Log Page:May Support 00:20:26.461 NVMe-MI Commands & Effects Log Page: May Support 00:20:26.461 Data Area 4 for Telemetry Log: Not Supported 00:20:26.461 Error Log Page Entries Supported: 1 00:20:26.461 Keep Alive: Not Supported 00:20:26.461 00:20:26.461 NVM Command Set Attributes 00:20:26.461 ========================== 00:20:26.461 Submission Queue Entry Size 00:20:26.461 Max: 1 00:20:26.461 Min: 1 00:20:26.461 Completion Queue Entry Size 00:20:26.461 Max: 1 00:20:26.461 Min: 1 00:20:26.461 Number of Namespaces: 0 00:20:26.461 Compare Command: Not Supported 00:20:26.461 Write Uncorrectable Command: Not Supported 00:20:26.461 Dataset Management Command: Not Supported 00:20:26.461 Write Zeroes Command: Not Supported 00:20:26.461 Set Features Save Field: Not Supported 00:20:26.461 Reservations: Not Supported 00:20:26.461 Timestamp: Not Supported 00:20:26.461 Copy: Not Supported 00:20:26.461 Volatile Write Cache: Not Present 00:20:26.461 Atomic Write Unit (Normal): 1 00:20:26.461 Atomic Write Unit (PFail): 1 00:20:26.461 Atomic Compare & Write Unit: 1 00:20:26.461 Fused Compare & Write: Not 
Supported 00:20:26.461 Scatter-Gather List 00:20:26.461 SGL Command Set: Supported 00:20:26.461 SGL Keyed: Supported 00:20:26.461 SGL Bit Bucket Descriptor: Not Supported 00:20:26.461 SGL Metadata Pointer: Not Supported 00:20:26.461 Oversized SGL: Not Supported 00:20:26.461 SGL Metadata Address: Not Supported 00:20:26.461 SGL Offset: Supported 00:20:26.461 Transport SGL Data Block: Not Supported 00:20:26.461 Replay Protected Memory Block: Not Supported 00:20:26.461 00:20:26.461 Firmware Slot Information 00:20:26.461 ========================= 00:20:26.461 Active slot: 0 00:20:26.461 00:20:26.461 00:20:26.461 Error Log 00:20:26.461 ========= 00:20:26.461 00:20:26.461 Active Namespaces 00:20:26.461 ================= 00:20:26.461 Discovery Log Page 00:20:26.461 ================== 00:20:26.461 Generation Counter: 2 00:20:26.461 Number of Records: 2 00:20:26.461 Record Format: 0 00:20:26.461 00:20:26.461 Discovery Log Entry 0 00:20:26.461 ---------------------- 00:20:26.461 Transport Type: 1 (RDMA) 00:20:26.461 Address Family: 1 (IPv4) 00:20:26.461 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:26.461 Entry Flags: 00:20:26.461 Duplicate Returned Information: 0 00:20:26.461 Explicit Persistent Connection Support for Discovery: 0 00:20:26.461 Transport Requirements: 00:20:26.461 Secure Channel: Not Specified 00:20:26.461 Port ID: 1 (0x0001) 00:20:26.461 Controller ID: 65535 (0xffff) 00:20:26.461 Admin Max SQ Size: 32 00:20:26.461 Transport Service Identifier: 4420 00:20:26.461 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:26.461 Transport Address: 192.168.100.8 00:20:26.461 Transport Specific Address Subtype - RDMA 00:20:26.461 RDMA QP Service Type: 1 (Reliable Connected) 00:20:26.461 RDMA Provider Type: 1 (No provider specified) 00:20:26.461 RDMA CM Service: 1 (RDMA_CM) 00:20:26.461 Discovery Log Entry 1 00:20:26.461 ---------------------- 00:20:26.461 Transport Type: 1 (RDMA) 00:20:26.461 Address Family: 1 (IPv4) 00:20:26.461 Subsystem Type: 2 (NVM Subsystem) 00:20:26.461 Entry Flags: 00:20:26.461 Duplicate Returned Information: 0 00:20:26.461 Explicit Persistent Connection Support for Discovery: 0 00:20:26.461 Transport Requirements: 00:20:26.461 Secure Channel: Not Specified 00:20:26.461 Port ID: 1 (0x0001) 00:20:26.461 Controller ID: 65535 (0xffff) 00:20:26.461 Admin Max SQ Size: 32 00:20:26.461 Transport Service Identifier: 4420 00:20:26.461 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:20:26.461 Transport Address: 192.168.100.8 00:20:26.461 Transport Specific Address Subtype - RDMA 00:20:26.461 RDMA QP Service Type: 1 (Reliable Connected) 00:20:26.461 RDMA Provider Type: 1 (No provider specified) 00:20:26.461 RDMA CM Service: 1 (RDMA_CM) 00:20:26.461 17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:26.461 get_feature(0x01) failed 00:20:26.461 get_feature(0x02) failed 00:20:26.461 get_feature(0x04) failed 00:20:26.461 ===================================================== 00:20:26.461 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:20:26.461 ===================================================== 00:20:26.461 Controller Capabilities/Features 00:20:26.461 ================================ 00:20:26.461 Vendor ID: 0000 00:20:26.461 Subsystem Vendor ID: 0000 00:20:26.461 Serial Number: 
38ced5fa3d399021d0c8 00:20:26.461 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:20:26.461 Firmware Version: 6.8.9-20 00:20:26.461 Recommended Arb Burst: 6 00:20:26.461 IEEE OUI Identifier: 00 00 00 00:20:26.461 Multi-path I/O 00:20:26.461 May have multiple subsystem ports: Yes 00:20:26.461 May have multiple controllers: Yes 00:20:26.461 Associated with SR-IOV VF: No 00:20:26.461 Max Data Transfer Size: 1048576 00:20:26.461 Max Number of Namespaces: 1024 00:20:26.461 Max Number of I/O Queues: 128 00:20:26.461 NVMe Specification Version (VS): 1.3 00:20:26.461 NVMe Specification Version (Identify): 1.3 00:20:26.461 Maximum Queue Entries: 128 00:20:26.461 Contiguous Queues Required: No 00:20:26.461 Arbitration Mechanisms Supported 00:20:26.461 Weighted Round Robin: Not Supported 00:20:26.461 Vendor Specific: Not Supported 00:20:26.461 Reset Timeout: 7500 ms 00:20:26.461 Doorbell Stride: 4 bytes 00:20:26.461 NVM Subsystem Reset: Not Supported 00:20:26.461 Command Sets Supported 00:20:26.461 NVM Command Set: Supported 00:20:26.461 Boot Partition: Not Supported 00:20:26.461 Memory Page Size Minimum: 4096 bytes 00:20:26.461 Memory Page Size Maximum: 4096 bytes 00:20:26.461 Persistent Memory Region: Not Supported 00:20:26.461 Optional Asynchronous Events Supported 00:20:26.461 Namespace Attribute Notices: Supported 00:20:26.461 Firmware Activation Notices: Not Supported 00:20:26.461 ANA Change Notices: Supported 00:20:26.461 PLE Aggregate Log Change Notices: Not Supported 00:20:26.461 LBA Status Info Alert Notices: Not Supported 00:20:26.461 EGE Aggregate Log Change Notices: Not Supported 00:20:26.461 Normal NVM Subsystem Shutdown event: Not Supported 00:20:26.461 Zone Descriptor Change Notices: Not Supported 00:20:26.461 Discovery Log Change Notices: Not Supported 00:20:26.461 Controller Attributes 00:20:26.461 128-bit Host Identifier: Supported 00:20:26.461 Non-Operational Permissive Mode: Not Supported 00:20:26.461 NVM Sets: Not Supported 00:20:26.461 Read Recovery Levels: Not Supported 00:20:26.461 Endurance Groups: Not Supported 00:20:26.461 Predictable Latency Mode: Not Supported 00:20:26.461 Traffic Based Keep ALive: Supported 00:20:26.461 Namespace Granularity: Not Supported 00:20:26.462 SQ Associations: Not Supported 00:20:26.462 UUID List: Not Supported 00:20:26.462 Multi-Domain Subsystem: Not Supported 00:20:26.462 Fixed Capacity Management: Not Supported 00:20:26.462 Variable Capacity Management: Not Supported 00:20:26.462 Delete Endurance Group: Not Supported 00:20:26.462 Delete NVM Set: Not Supported 00:20:26.462 Extended LBA Formats Supported: Not Supported 00:20:26.462 Flexible Data Placement Supported: Not Supported 00:20:26.462 00:20:26.462 Controller Memory Buffer Support 00:20:26.462 ================================ 00:20:26.462 Supported: No 00:20:26.462 00:20:26.462 Persistent Memory Region Support 00:20:26.462 ================================ 00:20:26.462 Supported: No 00:20:26.462 00:20:26.462 Admin Command Set Attributes 00:20:26.462 ============================ 00:20:26.462 Security Send/Receive: Not Supported 00:20:26.462 Format NVM: Not Supported 00:20:26.462 Firmware Activate/Download: Not Supported 00:20:26.462 Namespace Management: Not Supported 00:20:26.462 Device Self-Test: Not Supported 00:20:26.462 Directives: Not Supported 00:20:26.462 NVMe-MI: Not Supported 00:20:26.462 Virtualization Management: Not Supported 00:20:26.462 Doorbell Buffer Config: Not Supported 00:20:26.462 Get LBA Status Capability: Not Supported 00:20:26.462 Command & Feature Lockdown 
Capability: Not Supported 00:20:26.462 Abort Command Limit: 4 00:20:26.462 Async Event Request Limit: 4 00:20:26.462 Number of Firmware Slots: N/A 00:20:26.462 Firmware Slot 1 Read-Only: N/A 00:20:26.462 Firmware Activation Without Reset: N/A 00:20:26.462 Multiple Update Detection Support: N/A 00:20:26.462 Firmware Update Granularity: No Information Provided 00:20:26.462 Per-Namespace SMART Log: Yes 00:20:26.462 Asymmetric Namespace Access Log Page: Supported 00:20:26.462 ANA Transition Time : 10 sec 00:20:26.462 00:20:26.462 Asymmetric Namespace Access Capabilities 00:20:26.462 ANA Optimized State : Supported 00:20:26.462 ANA Non-Optimized State : Supported 00:20:26.462 ANA Inaccessible State : Supported 00:20:26.462 ANA Persistent Loss State : Supported 00:20:26.462 ANA Change State : Supported 00:20:26.462 ANAGRPID is not changed : No 00:20:26.462 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:20:26.462 00:20:26.462 ANA Group Identifier Maximum : 128 00:20:26.462 Number of ANA Group Identifiers : 128 00:20:26.462 Max Number of Allowed Namespaces : 1024 00:20:26.462 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:20:26.462 Command Effects Log Page: Supported 00:20:26.462 Get Log Page Extended Data: Supported 00:20:26.462 Telemetry Log Pages: Not Supported 00:20:26.462 Persistent Event Log Pages: Not Supported 00:20:26.462 Supported Log Pages Log Page: May Support 00:20:26.462 Commands Supported & Effects Log Page: Not Supported 00:20:26.462 Feature Identifiers & Effects Log Page:May Support 00:20:26.462 NVMe-MI Commands & Effects Log Page: May Support 00:20:26.462 Data Area 4 for Telemetry Log: Not Supported 00:20:26.462 Error Log Page Entries Supported: 128 00:20:26.462 Keep Alive: Supported 00:20:26.462 Keep Alive Granularity: 1000 ms 00:20:26.462 00:20:26.462 NVM Command Set Attributes 00:20:26.462 ========================== 00:20:26.462 Submission Queue Entry Size 00:20:26.462 Max: 64 00:20:26.462 Min: 64 00:20:26.462 Completion Queue Entry Size 00:20:26.462 Max: 16 00:20:26.462 Min: 16 00:20:26.462 Number of Namespaces: 1024 00:20:26.462 Compare Command: Not Supported 00:20:26.462 Write Uncorrectable Command: Not Supported 00:20:26.462 Dataset Management Command: Supported 00:20:26.462 Write Zeroes Command: Supported 00:20:26.462 Set Features Save Field: Not Supported 00:20:26.462 Reservations: Not Supported 00:20:26.462 Timestamp: Not Supported 00:20:26.462 Copy: Not Supported 00:20:26.462 Volatile Write Cache: Present 00:20:26.462 Atomic Write Unit (Normal): 1 00:20:26.462 Atomic Write Unit (PFail): 1 00:20:26.462 Atomic Compare & Write Unit: 1 00:20:26.462 Fused Compare & Write: Not Supported 00:20:26.462 Scatter-Gather List 00:20:26.462 SGL Command Set: Supported 00:20:26.462 SGL Keyed: Supported 00:20:26.462 SGL Bit Bucket Descriptor: Not Supported 00:20:26.462 SGL Metadata Pointer: Not Supported 00:20:26.462 Oversized SGL: Not Supported 00:20:26.462 SGL Metadata Address: Not Supported 00:20:26.462 SGL Offset: Supported 00:20:26.462 Transport SGL Data Block: Not Supported 00:20:26.462 Replay Protected Memory Block: Not Supported 00:20:26.462 00:20:26.462 Firmware Slot Information 00:20:26.462 ========================= 00:20:26.462 Active slot: 0 00:20:26.462 00:20:26.462 Asymmetric Namespace Access 00:20:26.462 =========================== 00:20:26.462 Change Count : 0 00:20:26.462 Number of ANA Group Descriptors : 1 00:20:26.462 ANA Group Descriptor : 0 00:20:26.462 ANA Group ID : 1 00:20:26.462 Number of NSID Values : 1 00:20:26.462 Change Count : 0 00:20:26.462 ANA State 
: 1 00:20:26.462 Namespace Identifier : 1 00:20:26.462 00:20:26.462 Commands Supported and Effects 00:20:26.462 ============================== 00:20:26.462 Admin Commands 00:20:26.462 -------------- 00:20:26.462 Get Log Page (02h): Supported 00:20:26.462 Identify (06h): Supported 00:20:26.462 Abort (08h): Supported 00:20:26.462 Set Features (09h): Supported 00:20:26.462 Get Features (0Ah): Supported 00:20:26.462 Asynchronous Event Request (0Ch): Supported 00:20:26.462 Keep Alive (18h): Supported 00:20:26.462 I/O Commands 00:20:26.462 ------------ 00:20:26.462 Flush (00h): Supported 00:20:26.462 Write (01h): Supported LBA-Change 00:20:26.462 Read (02h): Supported 00:20:26.462 Write Zeroes (08h): Supported LBA-Change 00:20:26.462 Dataset Management (09h): Supported 00:20:26.462 00:20:26.462 Error Log 00:20:26.462 ========= 00:20:26.462 Entry: 0 00:20:26.462 Error Count: 0x3 00:20:26.462 Submission Queue Id: 0x0 00:20:26.462 Command Id: 0x5 00:20:26.462 Phase Bit: 0 00:20:26.462 Status Code: 0x2 00:20:26.462 Status Code Type: 0x0 00:20:26.462 Do Not Retry: 1 00:20:26.720 Error Location: 0x28 00:20:26.720 LBA: 0x0 00:20:26.720 Namespace: 0x0 00:20:26.720 Vendor Log Page: 0x0 00:20:26.720 ----------- 00:20:26.720 Entry: 1 00:20:26.720 Error Count: 0x2 00:20:26.720 Submission Queue Id: 0x0 00:20:26.720 Command Id: 0x5 00:20:26.720 Phase Bit: 0 00:20:26.720 Status Code: 0x2 00:20:26.720 Status Code Type: 0x0 00:20:26.720 Do Not Retry: 1 00:20:26.720 Error Location: 0x28 00:20:26.720 LBA: 0x0 00:20:26.720 Namespace: 0x0 00:20:26.720 Vendor Log Page: 0x0 00:20:26.720 ----------- 00:20:26.720 Entry: 2 00:20:26.720 Error Count: 0x1 00:20:26.720 Submission Queue Id: 0x0 00:20:26.720 Command Id: 0x0 00:20:26.720 Phase Bit: 0 00:20:26.720 Status Code: 0x2 00:20:26.720 Status Code Type: 0x0 00:20:26.720 Do Not Retry: 1 00:20:26.720 Error Location: 0x28 00:20:26.720 LBA: 0x0 00:20:26.720 Namespace: 0x0 00:20:26.720 Vendor Log Page: 0x0 00:20:26.720 00:20:26.720 Number of Queues 00:20:26.720 ================ 00:20:26.720 Number of I/O Submission Queues: 128 00:20:26.720 Number of I/O Completion Queues: 128 00:20:26.720 00:20:26.720 ZNS Specific Controller Data 00:20:26.720 ============================ 00:20:26.720 Zone Append Size Limit: 0 00:20:26.720 00:20:26.720 00:20:26.720 Active Namespaces 00:20:26.720 ================= 00:20:26.720 get_feature(0x05) failed 00:20:26.720 Namespace ID:1 00:20:26.720 Command Set Identifier: NVM (00h) 00:20:26.720 Deallocate: Supported 00:20:26.720 Deallocated/Unwritten Error: Not Supported 00:20:26.720 Deallocated Read Value: Unknown 00:20:26.720 Deallocate in Write Zeroes: Not Supported 00:20:26.720 Deallocated Guard Field: 0xFFFF 00:20:26.720 Flush: Supported 00:20:26.721 Reservation: Not Supported 00:20:26.721 Namespace Sharing Capabilities: Multiple Controllers 00:20:26.721 Size (in LBAs): 732585168 (349GiB) 00:20:26.721 Capacity (in LBAs): 732585168 (349GiB) 00:20:26.721 Utilization (in LBAs): 732585168 (349GiB) 00:20:26.721 UUID: 8ea79803-9a5f-4ab1-b673-390787d387ce 00:20:26.721 Thin Provisioning: Not Supported 00:20:26.721 Per-NS Atomic Units: Yes 00:20:26.721 Atomic Boundary Size (Normal): 0 00:20:26.721 Atomic Boundary Size (PFail): 0 00:20:26.721 Atomic Boundary Offset: 0 00:20:26.721 NGUID/EUI64 Never Reused: No 00:20:26.721 ANA group ID: 1 00:20:26.721 Namespace Write Protected: No 00:20:26.721 Number of LBA Formats: 1 00:20:26.721 Current LBA Format: LBA Format #00 00:20:26.721 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:26.721 00:20:26.721 
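Stripped of xtrace noise, the configure_kernel_target sequence that built the controller identified above is a plain configfs recipe. xtrace does not show redirection targets, so the attribute file names below are reconstructed from the standard kernel nvmet configfs layout rather than read from the log; the NQN, backing device (/dev/nvme2n1), address, and service port are the values traced in this run.

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=$nvmet/ports/1

modprobe nvmet    # the run also has nvme-rdma loaded, per the earlier trace
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"

echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme2n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 192.168.100.8 > "$port/addr_traddr"
echo rdma > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"

# Linking the subsystem into the port is what makes it reachable:
ln -s "$subsys" "$port/subsystems/"

# The discovery that returned the two log entries shown earlier:
nvme discover -t rdma -a 192.168.100.8 -s 4420 \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c \
  --hostid=800e967b-538f-e911-906e-001635649f5c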
17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini
17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@514 -- # nvmfcleanup
17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync
17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e
17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20}
17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e
17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0
17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@515 -- # '[' -n '' ']'
17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@518 -- # '[' '' == iso ']'
17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]]
17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target
17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # echo 0
17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1
17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*)
17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modprobe -r nvmet_rdma nvmet
17:45:04 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:20:30.006 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:20:30.006 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:20:30.006 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:20:30.006 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:20:30.006 0000:af:00.0 (8086 2701): nvme -> vfio-pci
00:20:30.006 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:20:30.263 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:20:30.263 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:20:30.263 0000:5e:00.0 (144d a80a): nvme -> vfio-pci
00:20:30.263 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:20:30.263 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:20:30.263 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:20:30.263 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:20:30.263 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:20:30.263 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:20:30.264 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:20:30.264 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:20:30.264 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:20:30.264 0000:b0:00.0 (8086 2701): nvme -> vfio-pci
00:20:30.522
00:20:30.522 real 0m16.916s
00:20:30.522 user 0m5.057s
00:20:30.522 sys 0m11.090s
17:45:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable
17:45:08 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:20:30.522 ************************************
00:20:30.522 END TEST nvmf_identify_kernel_target
00:20:30.522 ************************************
17:45:08 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma
17:45:08 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
17:45:08 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
17:45:08 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:20:30.522 ************************************
00:20:30.522 START TEST nvmf_auth_host
00:20:30.522 ************************************
17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma
00:20:30.522 * Looking for test storage...
00:20:30.522 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:30.522 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:30.522 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:30.522 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:30.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.781 --rc genhtml_branch_coverage=1 00:20:30.781 --rc genhtml_function_coverage=1 00:20:30.781 --rc genhtml_legend=1 00:20:30.781 --rc geninfo_all_blocks=1 00:20:30.781 --rc geninfo_unexecuted_blocks=1 00:20:30.781 00:20:30.781 ' 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:30.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.781 --rc genhtml_branch_coverage=1 00:20:30.781 --rc genhtml_function_coverage=1 00:20:30.781 --rc genhtml_legend=1 00:20:30.781 --rc geninfo_all_blocks=1 00:20:30.781 --rc geninfo_unexecuted_blocks=1 00:20:30.781 00:20:30.781 ' 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:30.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.781 --rc genhtml_branch_coverage=1 00:20:30.781 --rc genhtml_function_coverage=1 00:20:30.781 --rc genhtml_legend=1 00:20:30.781 --rc geninfo_all_blocks=1 00:20:30.781 --rc geninfo_unexecuted_blocks=1 00:20:30.781 00:20:30.781 ' 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:30.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.781 --rc genhtml_branch_coverage=1 00:20:30.781 --rc genhtml_function_coverage=1 00:20:30.781 --rc genhtml_legend=1 00:20:30.781 --rc geninfo_all_blocks=1 00:20:30.781 --rc geninfo_unexecuted_blocks=1 00:20:30.781 00:20:30.781 ' 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:30.781 17:45:08 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:30.781 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:30.781 17:45:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:30.781 17:45:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:30.781 17:45:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:30.781 17:45:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:20:30.781 17:45:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:20:30.781 17:45:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:30.781 17:45:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:30.781 17:45:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:20:30.781 17:45:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:20:30.781 17:45:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:20:30.781 17:45:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:20:30.781 17:45:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:30.781 17:45:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:20:30.781 17:45:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:20:30.781 17:45:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:20:30.782 17:45:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.782 17:45:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:30.782 17:45:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.782 17:45:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:20:30.782 17:45:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:20:30.782 17:45:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:20:30.782 17:45:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.347 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:37.347 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:20:37.347 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local 
-ga mlx 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:20:37.348 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:20:37.348 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:20:37.348 17:45:15 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:20:37.348 Found net devices under 0000:18:00.0: mlx_0_0 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:20:37.348 Found net devices under 0000:18:00.1: mlx_0_1 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # is_hw=yes 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # rdma_device_init 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # uname 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:20:37.348 17:45:15 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe ib_cm 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe ib_core 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe ib_umad 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@70 -- # modprobe iw_cm 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # allocate_nic_ips 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # get_rdma_if_list 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 
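
The ip/awk/cut chain traced above is common.sh's get_ip_address helper pulling the IPv4 address off an RDMA interface. The same pipeline as a standalone function (interface name and the address it returns are the ones from this run):

    # Print the first IPv4 address bound to an interface, stripped of its /prefix
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # -> 192.168.100.8 on this node
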
00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:20:37.348 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:37.348 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:20:37.348 altname enp24s0f0np0 00:20:37.348 altname ens785f0np0 00:20:37.348 inet 192.168.100.8/24 scope global mlx_0_0 00:20:37.348 valid_lft forever preferred_lft forever 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:20:37.348 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:20:37.349 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:37.349 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:20:37.349 altname enp24s0f1np1 00:20:37.349 altname ens785f1np1 00:20:37.349 inet 192.168.100.9/24 scope global mlx_0_1 00:20:37.349 valid_lft forever preferred_lft forever 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # return 0 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # get_rdma_if_list 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_0 
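
get_rdma_if_list above walks net_devs and keeps only the RDMA-capable ports (mlx_0_0 and mlx_0_1 here) via rxe_cfg. An alternative, purely sysfs-based way to cross-check which net interfaces back an RDMA device — not what the harness itself does, just a hand-check sketch:

    # Map each InfiniBand/RoCE device to its backing net interface(s)
    for ibdev in /sys/class/infiniband/*; do
        [ -e "$ibdev" ] || continue
        echo "$(basename "$ibdev") -> $(ls "$ibdev/device/net" 2>/dev/null)"
    done
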
00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@108 -- # echo mlx_0_1 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@109 -- # continue 2 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # awk '{print $4}' 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # cut -d/ -f1 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:20:37.349 192.168.100.9' 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:20:37.349 192.168.100.9' 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # head -n 1 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:20:37.349 192.168.100.9' 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # tail -n +2 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # head -n 1 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:20:37.349 
17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:37.349 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.608 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # nvmfpid=704165 00:20:37.608 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:20:37.608 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # waitforlisten 704165 00:20:37.608 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 704165 ']' 00:20:37.608 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.608 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:37.608 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.608 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:37.608 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.608 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:37.608 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:20:37.608 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:37.608 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:37.608 17:45:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.866 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.866 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:20:37.866 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:20:37.866 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:20:37.866 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:37.866 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:20:37.866 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:20:37.866 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:20:37.866 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:37.866 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=b6a1d83ff722754d459cdea1ef451bf4 00:20:37.866 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 
00:20:37.866 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.ZeL 00:20:37.866 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key b6a1d83ff722754d459cdea1ef451bf4 0 00:20:37.866 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 b6a1d83ff722754d459cdea1ef451bf4 0 00:20:37.866 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:20:37.866 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:37.866 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=b6a1d83ff722754d459cdea1ef451bf4 00:20:37.866 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:20:37.866 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:20:37.866 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.ZeL 00:20:37.866 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.ZeL 00:20:37.866 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.ZeL 00:20:37.866 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:20:37.866 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:20:37.866 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:37.866 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:20:37.866 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:20:37.866 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:20:37.866 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:37.866 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=8257c5bf68a3e8c473c5176cd253aae3c66edb6a38d39da7e7a67702483d086f 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.awy 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 8257c5bf68a3e8c473c5176cd253aae3c66edb6a38d39da7e7a67702483d086f 3 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 8257c5bf68a3e8c473c5176cd253aae3c66edb6a38d39da7e7a67702483d086f 3 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=8257c5bf68a3e8c473c5176cd253aae3c66edb6a38d39da7e7a67702483d086f 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.awy 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.awy 00:20:37.867 17:45:16 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.awy 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=dd72f1c5d2d603734f24cab5b9dc599193f9213cdd00b531 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.JfG 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key dd72f1c5d2d603734f24cab5b9dc599193f9213cdd00b531 0 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 dd72f1c5d2d603734f24cab5b9dc599193f9213cdd00b531 0 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=dd72f1c5d2d603734f24cab5b9dc599193f9213cdd00b531 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.JfG 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.JfG 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.JfG 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:20:37.867 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=0d4e88c0f93f3ee5167cfc6c269d2485022ba2ae4992d657 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.1SZ 00:20:38.125 
17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 0d4e88c0f93f3ee5167cfc6c269d2485022ba2ae4992d657 2 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 0d4e88c0f93f3ee5167cfc6c269d2485022ba2ae4992d657 2 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=0d4e88c0f93f3ee5167cfc6c269d2485022ba2ae4992d657 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.1SZ 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.1SZ 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.1SZ 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=3720d2e6674f6b1957da6238b1b69770 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.EUC 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 3720d2e6674f6b1957da6238b1b69770 1 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 3720d2e6674f6b1957da6238b1b69770 1 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=3720d2e6674f6b1957da6238b1b69770 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.EUC 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.EUC 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.EUC 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:38.125 17:45:16 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha256 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=cec4ee25e9680cd5f192de110baa56e0 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.uGQ 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key cec4ee25e9680cd5f192de110baa56e0 1 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 cec4ee25e9680cd5f192de110baa56e0 1 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=cec4ee25e9680cd5f192de110baa56e0 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=1 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.uGQ 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.uGQ 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.uGQ 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha384 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=48 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=b60ee692e64695546e75700cacdd92f6ead4090f9c172e8b 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.IVc 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key b60ee692e64695546e75700cacdd92f6ead4090f9c172e8b 2 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 
b60ee692e64695546e75700cacdd92f6ead4090f9c172e8b 2 00:20:38.125 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:20:38.126 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:38.126 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=b60ee692e64695546e75700cacdd92f6ead4090f9c172e8b 00:20:38.126 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=2 00:20:38.126 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:20:38.126 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.IVc 00:20:38.383 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.IVc 00:20:38.383 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.IVc 00:20:38.383 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=null 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=32 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=5c02bcfda451c6a4e9fb59b2bca8d03b 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.of3 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 5c02bcfda451c6a4e9fb59b2bca8d03b 0 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 5c02bcfda451c6a4e9fb59b2bca8d03b 0 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=5c02bcfda451c6a4e9fb59b2bca8d03b 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=0 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.of3 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.of3 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.of3 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # local digest len file key 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:38.384 
17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # local -A digests 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digest=sha512 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # len=64 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # key=25c2eea20fca6bfc9d85efe51fe6c1da33887009e6b99c2e42710825e85c9619 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.M1W 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # format_dhchap_key 25c2eea20fca6bfc9d85efe51fe6c1da33887009e6b99c2e42710825e85c9619 3 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # format_key DHHC-1 25c2eea20fca6bfc9d85efe51fe6c1da33887009e6b99c2e42710825e85c9619 3 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local prefix key digest 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # key=25c2eea20fca6bfc9d85efe51fe6c1da33887009e6b99c2e42710825e85c9619 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=3 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # python - 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.M1W 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.M1W 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.M1W 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 704165 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 704165 ']' 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
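The gen_dhchap_key calls traced above all follow one recipe: pull len/2 random bytes from /dev/urandom with xxd, wrap the resulting hex string in DHHC-1 secret framing, and park it in a chmod-0600 temp file. A minimal sketch of the wrapping step, assuming the usual DH-HMAC-CHAP secret layout that the generated keys above exhibit (the ASCII hex string itself is the secret, a little-endian CRC-32 is appended before base64, and the second field encodes the hash: 00=none, 01=sha256, 02=sha384, 03=sha512); this is a reconstruction of format_dhchap_key/format_key from the trace, not the verbatim source:

format_dhchap_key() {
    local key=$1 digest=$2
    python - <<EOF
import base64, zlib
key = b"$key"                                # ASCII hex string produced by xxd above
crc = zlib.crc32(key).to_bytes(4, "little")  # little-endian CRC-32, appended before base64
print("DHHC-1:{:02x}:{}:".format($digest, base64.b64encode(key + crc).decode()), end="")
EOF
}

format_dhchap_key "$(xxd -p -c0 -l 16 /dev/urandom)" 1   # -> DHHC-1:01:...: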
00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:38.384 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.642 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:38.642 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:20:38.642 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:38.642 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ZeL 00:20:38.642 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.642 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.642 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.642 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.awy ]] 00:20:38.642 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.awy 00:20:38.642 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.642 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.642 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.642 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:38.642 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.JfG 00:20:38.642 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.642 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.642 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.642 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.1SZ ]] 00:20:38.642 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1SZ 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.EUC 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.uGQ ]] 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.uGQ 00:20:38.643 17:45:16 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.IVc 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.of3 ]] 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.of3 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.M1W 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:20:38.643 17:45:16 
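Once the waitforlisten gate above confirms pid 704165 is serving /var/tmp/spdk.sock, the host/auth.sh@80-82 loop just traced registers every temp-file secret with SPDK's file-based keyring: keyN for the host secret and, where a paired file was generated, ckeyN for the controller secret. The shape of the loop, sketched with a direct scripts/rpc.py invocation in place of the harness's rpc_cmd wrapper (the kernel-target setup that configure_kernel_target performs follows next in the trace):

# keys[] / ckeys[] hold the /tmp/spdk.key-* paths generated above
for i in "${!keys[@]}"; do
    scripts/rpc.py keyring_file_add_key "key$i" "${keys[i]}"
    if [[ -n ${ckeys[i]:-} ]]; then
        scripts/rpc.py keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done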
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # nvmet=/sys/kernel/config/nvmet 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # local block nvme 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # [[ ! -e /sys/module/nvmet ]] 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # modprobe nvmet 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:38.643 17:45:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:20:42.034 Waiting for block devices as requested 00:20:42.034 0000:5e:00.0 (144d a80a): vfio-pci -> nvme 00:20:42.292 0000:af:00.0 (8086 2701): vfio-pci -> nvme 00:20:42.292 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:20:42.549 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:20:42.549 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:20:42.549 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:20:42.549 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:20:42.807 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:20:42.807 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:20:42.807 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:20:43.065 0000:b0:00.0 (8086 2701): vfio-pci -> nvme 00:20:43.065 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:20:43.065 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:20:43.345 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:20:43.345 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:20:43.345 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:20:43.602 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:20:43.602 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:20:43.602 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme0n1 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme0n1 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 
00:20:44.559 No valid GPT data, bailing 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme0n1 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme1n1 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme1n1 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:20:44.559 No valid GPT data, bailing 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme1n1 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # for block in /sys/block/nvme* 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # [[ -e /sys/block/nvme2n1 ]] 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # is_block_zoned nvme2n1 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # block_in_use nvme2n1 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme2n1 pt 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme2n1 00:20:44.559 No valid GPT data, bailing 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # nvme=/dev/nvme2n1 00:20:44.559 17:45:22 
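configure_kernel_target scans /sys/block/nvme* for a namespace it can safely claim: zoned devices are skipped, and "No valid GPT data, bailing" from spdk-gpt.py together with an empty blkid PTTYPE is the desired outcome, meaning the disk carries no partition table. The scan settles on /dev/nvme2n1, which the trace that follows exports through nvmet configfs. Condensed below; the configfs attribute names are not visible in xtrace (redirection targets are not echoed), so the standard nvmet attributes shown here are an informed reconstruction:

# pick an unpartitioned, non-zoned namespace (lands on /dev/nvme2n1 in this run)
for block in /sys/block/nvme*; do
    dev=${block##*/}
    [[ $(<"$block/queue/zoned") == none ]] || continue           # skip zoned namespaces
    [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]] && nvme=/dev/$dev
done

# export it as kernel target nqn.2024-02.io.spdk:cnode0 on 192.168.100.8:4420 (rdma)
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo "SPDK-nqn.2024-02.io.spdk:cnode0" > "$subsys/attr_model"
echo 1             > "$subsys/attr_allow_any_host"
echo "$nvme"       > "$subsys/namespaces/1/device_path"
echo 1             > "$subsys/namespaces/1/enable"
echo 192.168.100.8 > "$nvmet/ports/1/addr_traddr"
echo rdma          > "$nvmet/ports/1/addr_trtype"
echo 4420          > "$nvmet/ports/1/addr_trsvcid"
echo ipv4          > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"

nvmet_auth_init then creates the hosts/nqn.2024-02.io.spdk:host0 entry, flips allow-any-host back to 0 (the "echo 0" at host/auth.sh@37), and links the host under the subsystem's allowed_hosts, so only the authenticated host may connect; the nvme discover output above confirms both the discovery subsystem and cnode0 are reachable over RDMA.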
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # [[ -b /dev/nvme2n1 ]] 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@685 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # echo /dev/nvme2n1 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 192.168.100.8 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo rdma 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 4420 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo ipv4 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@703 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c --hostid=800e967b-538f-e911-906e-001635649f5c -a 192.168.100.8 -t rdma -s 4420 00:20:44.559 00:20:44.559 Discovery Log Number of Records 2, Generation counter 2 00:20:44.559 =====Discovery Log Entry 0====== 00:20:44.559 trtype: rdma 00:20:44.559 adrfam: ipv4 00:20:44.559 subtype: current discovery subsystem 00:20:44.559 treq: not specified, sq flow control disable supported 00:20:44.559 portid: 1 00:20:44.559 trsvcid: 4420 00:20:44.559 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:44.559 traddr: 192.168.100.8 00:20:44.559 eflags: none 00:20:44.559 rdma_prtype: not specified 00:20:44.559 rdma_qptype: connected 00:20:44.559 rdma_cms: rdma-cm 00:20:44.559 rdma_pkey: 0x0000 00:20:44.559 =====Discovery Log Entry 1====== 00:20:44.559 trtype: rdma 00:20:44.559 adrfam: ipv4 00:20:44.559 subtype: nvme subsystem 00:20:44.559 treq: not specified, sq flow control disable supported 00:20:44.559 portid: 1 00:20:44.559 trsvcid: 4420 00:20:44.559 subnqn: nqn.2024-02.io.spdk:cnode0 00:20:44.559 traddr: 192.168.100.8 00:20:44.559 eflags: none 00:20:44.559 rdma_prtype: not specified 00:20:44.559 rdma_qptype: connected 00:20:44.559 rdma_cms: rdma-cm 00:20:44.559 rdma_pkey: 0x0000 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # 
local digest dhgroup keyid key ckey 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:44.559 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:44.560 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:20:44.560 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:20:44.560 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:44.560 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:44.560 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:20:44.560 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: ]] 00:20:44.560 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:20:44.560 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:44.560 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:20:44.560 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:44.560 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:44.560 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:20:44.560 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:44.560 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:20:44.560 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:44.560 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:44.560 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:44.560 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:44.560 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.560 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.560 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.818 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:44.818 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:44.818 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:44.818 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:44.818 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.818 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.818 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:44.818 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:44.818 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:44.819 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:44.819 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:44.819 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.819 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.819 17:45:22 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.819 nvme0n1 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZhMWQ4M2ZmNzIyNzU0ZDQ1OWNkZWExZWY0NTFiZjSmES8p: 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZhMWQ4M2ZmNzIyNzU0ZDQ1OWNkZWExZWY0NTFiZjSmES8p: 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: ]] 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.819 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.819 17:45:23 
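connect_authenticate is the host-side half of each test case: pin the initiator to exactly one digest/dhgroup combination via bdev_nvme_set_options, then attach using the keyring names registered earlier. The attach just traced uses key0/ckey0; a controller that comes up, is verified by name, and detaches cleanly (as in the lines that follow) is the pass condition. Equivalent scripts/rpc.py calls, sketched outside the rpc_cmd wrapper:

# one digest/dhgroup cell of the matrix; key0/ckey0 were loaded into the keyring above
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
[[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
scripts/rpc.py bdev_nvme_detach_controller nvme0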
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.078 nvme0n1 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: ]] 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:45.078 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:45.079 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:45.079 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.079 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.079 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.337 nvme0n1 00:20:45.337 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.337 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.337 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:45.337 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.337 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.337 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.337 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.337 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.337 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.337 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.337 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.337 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:45.337 
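Each nvmet_auth_set_key invocation (keyid 2 is traced next) arms the kernel side of the handshake by writing the digest, DH group, and secret(s) into the host's nvmet configfs entry; supplying a ckey additionally sets the controller secret and makes the authentication bidirectional. The echo targets are again invisible to xtrace; the stock nvmet attribute names are assumed in this sketch:

# arm DH-HMAC-CHAP for the host entry created by nvmet_auth_init
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest under test
echo ffdhe2048      > "$host/dhchap_dhgroup"   # DH group under test
echo "$key"         > "$host/dhchap_key"       # host secret (DHHC-1:..:...:)
[[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"   # controller secret, if bidirectional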
17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:45.337 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.337 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:45.337 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:45.337 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:45.337 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:20:45.337 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:20:45.337 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:45.337 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:45.337 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:20:45.338 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: ]] 00:20:45.338 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:20:45.338 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:20:45.338 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:45.338 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:45.338 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:45.338 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:45.338 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.338 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:45.338 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.338 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.338 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.338 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:45.338 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:45.338 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:45.338 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:45.338 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.338 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.338 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:45.338 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:45.338 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 
00:20:45.338 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:45.338 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:45.338 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.338 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.338 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.597 nvme0n1 00:20:45.597 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.597 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.597 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.597 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:45.597 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.597 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.597 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.597 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.597 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.597 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.597 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.597 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:45.597 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:20:45.597 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.597 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:45.597 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:45.597 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:45.597 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjYwZWU2OTJlNjQ2OTU1NDZlNzU3MDBjYWNkZDkyZjZlYWQ0MDkwZjljMTcyZThioGhtZw==: 00:20:45.597 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: 00:20:45.597 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:45.597 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:45.597 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjYwZWU2OTJlNjQ2OTU1NDZlNzU3MDBjYWNkZDkyZjZlYWQ0MDkwZjljMTcyZThioGhtZw==: 00:20:45.597 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: ]] 00:20:45.597 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: 00:20:45.597 17:45:23 
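get_main_ns_ip, which precedes every attach (including the connect_authenticate for keyid 3 traced next), simply maps the transport to the right address variable; with rdma it resolves NVMF_FIRST_TARGET_IP and prints 192.168.100.8 throughout this run. A compressed equivalent, with the TEST_TRANSPORT variable name assumed (the trace only shows its expanded value, rdma):

get_main_ns_ip() {
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    local var=${ip_candidates[$TEST_TRANSPORT]}   # rdma in this job
    [[ -n ${!var} ]] && echo "${!var}"            # indirect expansion -> 192.168.100.8
}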
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:20:45.597 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:45.597 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:45.597 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:45.597 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:45.597 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.598 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:45.598 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.598 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.598 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.598 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:45.598 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:45.598 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:45.598 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:45.598 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.598 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.598 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:45.598 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:45.598 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:45.598 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:45.598 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:45.598 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:45.598 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.598 17:45:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.856 nvme0n1 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.856 
17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjVjMmVlYTIwZmNhNmJmYzlkODVlZmU1MWZlNmMxZGEzMzg4NzAwOWU2Yjk5YzJlNDI3MTA4MjVlODVjOTYxOTcLBKo=: 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjVjMmVlYTIwZmNhNmJmYzlkODVlZmU1MWZlNmMxZGEzMzg4NzAwOWU2Yjk5YzJlNDI3MTA4MjVlODVjOTYxOTcLBKo=: 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:45.856 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:45.857 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:45.857 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:45.857 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:45.857 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.857 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.114 nvme0n1 00:20:46.114 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.114 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.114 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.114 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.114 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.114 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.114 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.114 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.114 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.114 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.114 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.114 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:46.114 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:46.114 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:20:46.114 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.114 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:46.114 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:46.114 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:46.114 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZhMWQ4M2ZmNzIyNzU0ZDQ1OWNkZWExZWY0NTFiZjSmES8p: 00:20:46.114 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: 00:20:46.114 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:46.114 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:46.114 17:45:24 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZhMWQ4M2ZmNzIyNzU0ZDQ1OWNkZWExZWY0NTFiZjSmES8p: 00:20:46.114 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: ]] 00:20:46.115 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: 00:20:46.115 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:20:46.115 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.115 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:46.115 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:46.115 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:46.115 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.115 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:46.115 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.115 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.115 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.115 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.115 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:46.115 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:46.115 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:46.115 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.115 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.115 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:46.115 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:46.115 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:46.115 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:46.115 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:46.115 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.115 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.115 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.372 nvme0n1 00:20:46.372 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.372 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 
00:20:46.372 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.372 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.372 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.372 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.372 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.372 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.372 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.372 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.372 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.372 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:46.372 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:20:46.372 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.372 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:46.372 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:46.372 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:46.372 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:20:46.372 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:20:46.372 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:46.372 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:46.372 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:20:46.372 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: ]] 00:20:46.372 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:20:46.373 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:20:46.373 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.373 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:46.373 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:46.373 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:46.373 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.373 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:46.373 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 
-- # xtrace_disable 00:20:46.373 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.373 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.373 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.373 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:46.373 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:46.373 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:46.373 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.373 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.373 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:46.373 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:46.373 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:46.373 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:46.373 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:46.373 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:46.373 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.373 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.373 nvme0n1 00:20:46.373 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.373 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:46.631 
17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: ]] 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:46.631 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:46.632 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma 
-f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:46.632 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.632 17:45:24 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.632 nvme0n1 00:20:46.632 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.632 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.632 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.632 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.632 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjYwZWU2OTJlNjQ2OTU1NDZlNzU3MDBjYWNkZDkyZjZlYWQ0MDkwZjljMTcyZThioGhtZw==: 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjYwZWU2OTJlNjQ2OTU1NDZlNzU3MDBjYWNkZDkyZjZlYWQ0MDkwZjljMTcyZThioGhtZw==: 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: ]] 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.890 nvme0n1 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.890 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.149 17:45:25 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjVjMmVlYTIwZmNhNmJmYzlkODVlZmU1MWZlNmMxZGEzMzg4NzAwOWU2Yjk5YzJlNDI3MTA4MjVlODVjOTYxOTcLBKo=: 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjVjMmVlYTIwZmNhNmJmYzlkODVlZmU1MWZlNmMxZGEzMzg4NzAwOWU2Yjk5YzJlNDI3MTA4MjVlODVjOTYxOTcLBKo=: 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_FIRST_TARGET_IP 00:20:47.149 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:47.150 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:47.150 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:47.150 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.150 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.150 nvme0n1 00:20:47.150 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.150 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.150 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.150 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.150 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.150 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.408 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.408 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.408 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.408 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.408 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.408 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:47.408 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.408 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:20:47.408 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.408 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.408 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:47.408 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:47.408 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZhMWQ4M2ZmNzIyNzU0ZDQ1OWNkZWExZWY0NTFiZjSmES8p: 00:20:47.408 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: 00:20:47.408 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.408 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:47.408 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZhMWQ4M2ZmNzIyNzU0ZDQ1OWNkZWExZWY0NTFiZjSmES8p: 00:20:47.408 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: ]] 00:20:47.409 17:45:25 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: 00:20:47.409 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:20:47.409 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.409 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:47.409 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:47.409 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:47.409 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.409 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:47.409 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.409 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.409 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.409 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.409 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:47.409 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:47.409 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:47.409 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.409 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.409 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:47.409 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:47.409 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:47.409 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:47.409 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:47.409 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.409 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.409 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.668 nvme0n1 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.668 17:45:25 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: ]] 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.668 17:45:25 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.927 nvme0n1 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 
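Every secret in this run uses the NVMe DH-HMAC-CHAP representation DHHC-1:&lt;t&gt;:&lt;base64&gt;:, where &lt;t&gt; selects the transform applied to the configured secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload carries the secret followed by its CRC-32. On the target side, nvmet_auth_set_key (the host/auth.sh@42-@51 steps traced here) writes the digest, DH group, and key pair for the host; a sketch of where those four echoes plausibly land, assuming the kernel nvmet configfs attribute names and reusing the ffdhe4096/keyid=2 values from this pass:

  # hypothetical host entry; attribute names per the kernel nvmet configfs ABI
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"     # digest, echoed at auth.sh@48
  echo ffdhe4096      > "$host/dhchap_dhgroup"  # DH group, echoed at auth.sh@49
  echo 'DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML:' > "$host/dhchap_key"
  echo 'DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs:' > "$host/dhchap_ctrl_key"

The [[ -z ... ]] guard at auth.sh@51 skips the last write when no controller key is defined, which is why keyid 4 (ckey='') is always attached with --dhchap-key key4 alone.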
00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: ]] 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.927 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:48.186 nvme0n1 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjYwZWU2OTJlNjQ2OTU1NDZlNzU3MDBjYWNkZDkyZjZlYWQ0MDkwZjljMTcyZThioGhtZw==: 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjYwZWU2OTJlNjQ2OTU1NDZlNzU3MDBjYWNkZDkyZjZlYWQ0MDkwZjljMTcyZThioGhtZw==: 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: ]] 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.186 17:45:26 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.186 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.445 nvme0n1 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 
4 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjVjMmVlYTIwZmNhNmJmYzlkODVlZmU1MWZlNmMxZGEzMzg4NzAwOWU2Yjk5YzJlNDI3MTA4MjVlODVjOTYxOTcLBKo=: 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjVjMmVlYTIwZmNhNmJmYzlkODVlZmU1MWZlNmMxZGEzMzg4NzAwOWU2Yjk5YzJlNDI3MTA4MjVlODVjOTYxOTcLBKo=: 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.445 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.704 nvme0n1 00:20:48.704 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.704 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.704 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.704 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:48.704 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.704 17:45:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZhMWQ4M2ZmNzIyNzU0ZDQ1OWNkZWExZWY0NTFiZjSmES8p: 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZhMWQ4M2ZmNzIyNzU0ZDQ1OWNkZWExZWY0NTFiZjSmES8p: 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: ]] 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:20:48.704 17:45:27 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.704 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.276 nvme0n1 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
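At this point the sha256 sweep has finished ffdhe3072 and ffdhe4096 and is working through the same five keyids under ffdhe6144. The driving loop, reconstructed from the host/auth.sh@101-@104 markers in the trace (a sketch using the variable and function names the trace shows, not the verbatim script):

  for dhgroup in "${dhgroups[@]}"; do      # ffdhe3072, ffdhe4096, ffdhe6144, ... in this trace
      for keyid in "${!keys[@]}"; do       # keyids 0-4; key4 carries no controller key
          nvmet_auth_set_key sha256 "$dhgroup" "$keyid"    # reprogram the kernel target
          connect_authenticate sha256 "$dhgroup" "$keyid"  # attach, verify nvme0, detach
      done
  done

Each iteration reprograms the target before reconnecting, so a key left over from the previous keyid cannot carry into the next handshake.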
00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: ]] 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:49.276 
17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.276 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.534 nvme0n1 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 
-- # echo ffdhe6144 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: ]] 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.534 17:45:27 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.101 nvme0n1 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.101 17:45:28 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjYwZWU2OTJlNjQ2OTU1NDZlNzU3MDBjYWNkZDkyZjZlYWQ0MDkwZjljMTcyZThioGhtZw==: 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjYwZWU2OTJlNjQ2OTU1NDZlNzU3MDBjYWNkZDkyZjZlYWQ0MDkwZjljMTcyZThioGhtZw==: 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: ]] 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.101 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.359 nvme0n1 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe6144 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjVjMmVlYTIwZmNhNmJmYzlkODVlZmU1MWZlNmMxZGEzMzg4NzAwOWU2Yjk5YzJlNDI3MTA4MjVlODVjOTYxOTcLBKo=: 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjVjMmVlYTIwZmNhNmJmYzlkODVlZmU1MWZlNmMxZGEzMzg4NzAwOWU2Yjk5YzJlNDI3MTA4MjVlODVjOTYxOTcLBKo=: 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:50.359 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:50.360 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:50.360 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:50.360 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.360 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.360 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:50.360 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:50.360 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:50.360 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:50.360 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:50.360 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:50.360 17:45:28 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.360 17:45:28 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.926 nvme0n1 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZhMWQ4M2ZmNzIyNzU0ZDQ1OWNkZWExZWY0NTFiZjSmES8p: 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZhMWQ4M2ZmNzIyNzU0ZDQ1OWNkZWExZWY0NTFiZjSmES8p: 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: ]] 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:50.926 17:45:29 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.926 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.927 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.927 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:50.927 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:50.927 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:50.927 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:50.927 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.927 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.927 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:50.927 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:50.927 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:50.927 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:50.927 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:50.927 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.927 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.927 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.493 nvme0n1 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
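Note the expansion that keeps recurring at host/auth.sh line 58, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}): when ckeys[keyid] is empty, as it is for keyid 4 above, the array expands to zero words and the attach runs with the host key only, i.e. unidirectional authentication. A self-contained illustration of that ${var:+...} idiom (key values hypothetical; ckeys is an indexed array so the subscript keyid is evaluated arithmetically):

    ckeys=([1]="ctrlr-secret" [4]="")
    for keyid in 1 4; do
        # expands to two words for keyid 1, to zero words for keyid 4
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${#ckey[@]} extra arg(s): ${ckey[*]}"
    done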
00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: ]] 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z 
rdma ]] 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.493 17:45:29 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.060 nvme0n1 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: ]] 
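The get_main_ns_ip fragment repeated throughout (nvmf/common.sh lines 767-781 in the trace) picks the dial address by mapping the transport in use to the name of an environment variable and then dereferencing that variable indirectly. A reconstruction from the trace follows; the exact guards in nvmf/common.sh may differ slightly, and TEST_TRANSPORT / NVMF_FIRST_TARGET_IP / NVMF_INITIATOR_IP come from the surrounding test environment:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP   # resolves to 192.168.100.8 in this run
            [tcp]=NVMF_INITIATOR_IP
        )
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]-} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1       # ${!ip}: value of the variable named by $ip
        echo "${!ip}"
    }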
00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.060 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.626 nvme0n1 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjYwZWU2OTJlNjQ2OTU1NDZlNzU3MDBjYWNkZDkyZjZlYWQ0MDkwZjljMTcyZThioGhtZw==: 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjYwZWU2OTJlNjQ2OTU1NDZlNzU3MDBjYWNkZDkyZjZlYWQ0MDkwZjljMTcyZThioGhtZw==: 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: ]] 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:52.626 17:45:30 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.626 17:45:30 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.193 nvme0n1 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjVjMmVlYTIwZmNhNmJmYzlkODVlZmU1MWZlNmMxZGEzMzg4NzAwOWU2Yjk5YzJlNDI3MTA4MjVlODVjOTYxOTcLBKo=: 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey= 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjVjMmVlYTIwZmNhNmJmYzlkODVlZmU1MWZlNmMxZGEzMzg4NzAwOWU2Yjk5YzJlNDI3MTA4MjVlODVjOTYxOTcLBKo=: 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.193 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.452 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.452 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:53.452 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:53.452 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:53.452 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.452 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.452 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:53.452 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:53.452 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:53.452 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:53.452 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:53.452 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:53.452 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.452 17:45:31 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.019 nvme0n1 00:20:54.019 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.019 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.019 17:45:32 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.019 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.019 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.019 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.019 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.019 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.019 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.019 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.019 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.019 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:54.019 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:54.019 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.019 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:20:54.019 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.019 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:54.019 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:54.019 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:54.019 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZhMWQ4M2ZmNzIyNzU0ZDQ1OWNkZWExZWY0NTFiZjSmES8p: 00:20:54.019 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: 00:20:54.019 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:54.019 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:54.019 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZhMWQ4M2ZmNzIyNzU0ZDQ1OWNkZWExZWY0NTFiZjSmES8p: 00:20:54.019 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: ]] 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.020 17:45:32 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.020 nvme0n1 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.020 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 
1 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: ]] 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_FIRST_TARGET_IP 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.279 nvme0n1 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: ]] 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:20:54.279 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:20:54.280 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.280 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:54.280 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:54.280 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:54.280 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.280 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:54.280 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.280 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.280 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.280 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.280 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:54.280 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:54.280 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:54.280 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.280 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.280 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:54.280 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:54.280 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:54.280 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:54.280 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.539 nvme0n1 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.539 17:45:32 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjYwZWU2OTJlNjQ2OTU1NDZlNzU3MDBjYWNkZDkyZjZlYWQ0MDkwZjljMTcyZThioGhtZw==: 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjYwZWU2OTJlNjQ2OTU1NDZlNzU3MDBjYWNkZDkyZjZlYWQ0MDkwZjljMTcyZThioGhtZw==: 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: ]] 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A 
ip_candidates 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.539 17:45:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.798 nvme0n1 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjVjMmVlYTIwZmNhNmJmYzlkODVlZmU1MWZlNmMxZGEzMzg4NzAwOWU2Yjk5YzJlNDI3MTA4MjVlODVjOTYxOTcLBKo=: 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe2048 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjVjMmVlYTIwZmNhNmJmYzlkODVlZmU1MWZlNmMxZGEzMzg4NzAwOWU2Yjk5YzJlNDI3MTA4MjVlODVjOTYxOTcLBKo=: 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.798 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.057 nvme0n1 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.057 
17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZhMWQ4M2ZmNzIyNzU0ZDQ1OWNkZWExZWY0NTFiZjSmES8p: 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZhMWQ4M2ZmNzIyNzU0ZDQ1OWNkZWExZWY0NTFiZjSmES8p: 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: ]] 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.057 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.316 nvme0n1 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: ]] 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.316 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.575 nvme0n1 00:20:55.575 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.575 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.575 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.575 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:55.575 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.575 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.575 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.575 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.575 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.575 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.575 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.575 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:55.575 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:20:55.575 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.575 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:55.575 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:55.575 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:55.575 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:20:55.575 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:20:55.575 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:55.575 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:55.575 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:20:55.575 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: ]] 00:20:55.575 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:20:55.575 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:20:55.575 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:55.575 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:55.575 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:55.576 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:55.576 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.576 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:55.576 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.576 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.576 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.576 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:55.576 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:55.576 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:55.576 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:55.576 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.576 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.576 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:55.576 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:55.576 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:55.576 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:55.576 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:55.576 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.576 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.576 17:45:33 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.834 nvme0n1 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.834 17:45:34 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjYwZWU2OTJlNjQ2OTU1NDZlNzU3MDBjYWNkZDkyZjZlYWQ0MDkwZjljMTcyZThioGhtZw==: 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjYwZWU2OTJlNjQ2OTU1NDZlNzU3MDBjYWNkZDkyZjZlYWQ0MDkwZjljMTcyZThioGhtZw==: 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: ]] 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.834 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.092 nvme0n1 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjVjMmVlYTIwZmNhNmJmYzlkODVlZmU1MWZlNmMxZGEzMzg4NzAwOWU2Yjk5YzJlNDI3MTA4MjVlODVjOTYxOTcLBKo=: 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjVjMmVlYTIwZmNhNmJmYzlkODVlZmU1MWZlNmMxZGEzMzg4NzAwOWU2Yjk5YzJlNDI3MTA4MjVlODVjOTYxOTcLBKo=: 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 
00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.092 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.350 nvme0n1 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.350 
17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZhMWQ4M2ZmNzIyNzU0ZDQ1OWNkZWExZWY0NTFiZjSmES8p: 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZhMWQ4M2ZmNzIyNzU0ZDQ1OWNkZWExZWY0NTFiZjSmES8p: 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: ]] 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # 
local ip 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.350 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.609 nvme0n1 00:20:56.609 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.609 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.609 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:56.609 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.609 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.609 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.609 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.609 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.609 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.609 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.609 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.609 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:56.609 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:20:56.609 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.609 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:56.609 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:56.609 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:56.609 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:20:56.609 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:20:56.609 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:56.609 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:56.609 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:20:56.609 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: ]] 00:20:56.609 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:20:56.609 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:20:56.609 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:56.610 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:56.610 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:56.610 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:56.610 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.610 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:56.610 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.610 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.610 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.610 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:56.610 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:56.610 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:56.610 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:56.610 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.610 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.610 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:56.610 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:56.610 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:56.610 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:56.610 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:56.610 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:56.610 17:45:34 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.610 17:45:34 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.868 nvme0n1 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: ]] 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.868 17:45:35 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.868 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.127 nvme0n1 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 
3 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjYwZWU2OTJlNjQ2OTU1NDZlNzU3MDBjYWNkZDkyZjZlYWQ0MDkwZjljMTcyZThioGhtZw==: 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjYwZWU2OTJlNjQ2OTU1NDZlNzU3MDBjYWNkZDkyZjZlYWQ0MDkwZjljMTcyZThioGhtZw==: 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: ]] 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.127 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.385 nvme0n1 00:20:57.385 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.385 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.385 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.385 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.385 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.385 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.385 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.385 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.385 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.385 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.643 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.643 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.643 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:20:57.643 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.643 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:57.643 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:57.643 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:57.643 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjVjMmVlYTIwZmNhNmJmYzlkODVlZmU1MWZlNmMxZGEzMzg4NzAwOWU2Yjk5YzJlNDI3MTA4MjVlODVjOTYxOTcLBKo=: 00:20:57.643 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:57.643 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:57.643 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:57.643 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjVjMmVlYTIwZmNhNmJmYzlkODVlZmU1MWZlNmMxZGEzMzg4NzAwOWU2Yjk5YzJlNDI3MTA4MjVlODVjOTYxOTcLBKo=: 00:20:57.643 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:57.643 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:20:57.643 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.643 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 
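A note on the pattern check that recurs after every attach: xtrace escapes each character of a [[ ]] right-hand side, so "[[ nvme0 == \n\v\m\e\0 ]]" is not a newline/vertical-tab pattern but the literal string nvme0 rendered character by character. Unescaped, the assertion is simply the following (hedged sketch; rpc_cmd is the harness's rpc.py wrapper):

    # fetch the controller list and require that exactly "nvme0" is present;
    # a failed DH-HMAC-CHAP handshake would have left no controller behind
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]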
00:20:57.643 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:57.643 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:57.643 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.643 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:57.643 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.643 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.643 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.643 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.643 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:57.643 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:57.643 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:57.643 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.643 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.643 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:57.644 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:57.644 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:57.644 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:57.644 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:57.644 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:57.644 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.644 17:45:35 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.644 nvme0n1 00:20:57.644 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.644 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.644 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.644 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.644 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.644 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
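That closes the ffdhe4096 pass. Stripped of the xtrace noise, every digest/dhgroup/keyid combination in this section exercises the same short RPC sequence; a condensed sketch of one iteration follows (flags copied from the trace; key3/ckey3 are key names the test registered earlier in the run, not shown in this excerpt, and rpc.py stands in for the harness's rpc_cmd wrapper):

    # 1. pin the initiator to the digest and DH group under test
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
    # 2. connect; --dhchap-key authenticates the host, and --dhchap-ctrlr-key,
    #    when present, additionally demands bidirectional authentication
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
    # 3. the handshake passed only if the controller materialized
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expect nvme0
    # 4. tear down before the next combination
    scripts/rpc.py bdev_nvme_detach_controller nvme0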
00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZhMWQ4M2ZmNzIyNzU0ZDQ1OWNkZWExZWY0NTFiZjSmES8p: 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZhMWQ4M2ZmNzIyNzU0ZDQ1OWNkZWExZWY0NTFiZjSmES8p: 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: ]] 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.902 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.160 nvme0n1 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 
00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: ]] 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.160 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.734 nvme0n1 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 
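Worth decoding once: the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line traced above is a standard bash trick for optional arguments. The ${var:+word} expansion emits the option words only when a controller secret exists for that key index, so a later "${ckey[@]}" quietly expands to nothing for unidirectional keys (keyid 4 in this run has an empty ckey). A self-contained toy, with array contents invented for illustration:

    #!/usr/bin/env bash
    ckeys=("DHHC-1:03:exampleSecret:" "")    # index 1 deliberately has no controller key
    for keyid in "${!ckeys[@]}"; do
        # expands to two words when ckeys[keyid] is non-empty, to zero words otherwise
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-unidirectional, no --dhchap-ctrlr-key}"
    done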
00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: ]] 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:58.734 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:58.735 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:58.735 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:58.735 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:58.735 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.735 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.735 17:45:36 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.995 nvme0n1 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:58.995 17:45:37 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjYwZWU2OTJlNjQ2OTU1NDZlNzU3MDBjYWNkZDkyZjZlYWQ0MDkwZjljMTcyZThioGhtZw==: 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjYwZWU2OTJlNjQ2OTU1NDZlNzU3MDBjYWNkZDkyZjZlYWQ0MDkwZjljMTcyZThioGhtZw==: 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: ]] 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.995 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.562 nvme0n1 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjVjMmVlYTIwZmNhNmJmYzlkODVlZmU1MWZlNmMxZGEzMzg4NzAwOWU2Yjk5YzJlNDI3MTA4MjVlODVjOTYxOTcLBKo=: 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjVjMmVlYTIwZmNhNmJmYzlkODVlZmU1MWZlNmMxZGEzMzg4NzAwOWU2Yjk5YzJlNDI3MTA4MjVlODVjOTYxOTcLBKo=: 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.562 17:45:37 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.821 nvme0n1 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZhMWQ4M2ZmNzIyNzU0ZDQ1OWNkZWExZWY0NTFiZjSmES8p: 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZhMWQ4M2ZmNzIyNzU0ZDQ1OWNkZWExZWY0NTFiZjSmES8p: 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: ]] 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:20:59.821 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:20:59.822 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.822 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.822 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.388 nvme0n1 00:21:00.388 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.388 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.388 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.388 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.388 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.388 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.388 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.388 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.388 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.388 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.388 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.388 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: ]] 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:00.389 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:00.647 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.647 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.647 17:45:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.905 nvme0n1 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:01.164 17:45:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: ]] 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.164 17:45:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.164 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.732 nvme0n1 00:21:01.732 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.732 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.732 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.732 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:01.732 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.732 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.732 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.732 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.732 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.732 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.732 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.732 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:01.732 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:21:01.732 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.732 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:01.732 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:01.732 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:01.732 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjYwZWU2OTJlNjQ2OTU1NDZlNzU3MDBjYWNkZDkyZjZlYWQ0MDkwZjljMTcyZThioGhtZw==: 
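The dense runs of [[ -z ... ]] checks on this and the surrounding lines are the get_main_ns_ip helper resolving which address to dial: it maps the transport to the name of the environment variable holding the target address, then expands that name indirectly. A simplified sketch reconstructed from the trace (TEST_TRANSPORT is inferred from SPDK's test environment; the trace only shows its value, rdma):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        ip=${ip_candidates[$TEST_TRANSPORT]}   # rdma -> NVMF_FIRST_TARGET_IP
        echo "${!ip}"                          # indirect expansion; 192.168.100.8 in this run
    }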
00:21:01.732 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: 00:21:01.733 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:01.733 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:01.733 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjYwZWU2OTJlNjQ2OTU1NDZlNzU3MDBjYWNkZDkyZjZlYWQ0MDkwZjljMTcyZThioGhtZw==: 00:21:01.733 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: ]] 00:21:01.733 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: 00:21:01.733 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:21:01.733 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.733 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:01.733 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:01.733 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:01.733 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.733 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:01.733 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.733 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.733 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.733 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.733 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:01.733 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:01.733 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:01.733 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.733 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.733 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:01.733 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:01.733 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:01.733 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:01.733 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:01.733 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:01.733 17:45:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.733 17:45:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.300 nvme0n1 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjVjMmVlYTIwZmNhNmJmYzlkODVlZmU1MWZlNmMxZGEzMzg4NzAwOWU2Yjk5YzJlNDI3MTA4MjVlODVjOTYxOTcLBKo=: 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjVjMmVlYTIwZmNhNmJmYzlkODVlZmU1MWZlNmMxZGEzMzg4NzAwOWU2Yjk5YzJlNDI3MTA4MjVlODVjOTYxOTcLBKo=: 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.300 17:45:40 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.868 nvme0n1 00:21:02.868 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.868 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.868 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.868 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.868 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.868 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.868 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.868 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.868 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.868 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.868 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.868 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:02.868 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:02.868 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.868 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 
0 00:21:02.868 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.868 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZhMWQ4M2ZmNzIyNzU0ZDQ1OWNkZWExZWY0NTFiZjSmES8p: 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZhMWQ4M2ZmNzIyNzU0ZDQ1OWNkZWExZWY0NTFiZjSmES8p: 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: ]] 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # 
ip=NVMF_FIRST_TARGET_IP 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.869 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.128 nvme0n1 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: ]] 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:03.128 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:03.129 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:03.129 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:03.129 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:03.129 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:03.129 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:03.129 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:03.129 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:03.129 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.129 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.129 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.388 nvme0n1 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.388 
17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: ]] 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:03.388 17:45:41 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.388 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.647 nvme0n1 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjYwZWU2OTJlNjQ2OTU1NDZlNzU3MDBjYWNkZDkyZjZlYWQ0MDkwZjljMTcyZThioGhtZw==: 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: 00:21:03.647 17:45:41 
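[annotation] The recurring `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` record at host/auth.sh@58 is the idiom that decides whether the attach carries a controller key: the array expands to the two extra words only when ckeys[keyid] is non-empty, which is why the keyid=4 rounds (ckey='') attach with --dhchap-key alone. A standalone sketch of the idiom, with placeholder secrets:

    # Two keyids: 0 has a controller key, 1 does not (mirrors keyid 4 above).
    ckeys=("DHHC-1:03:placeholder-controller-secret:" "")
    for keyid in 0 1; do
        # Expands to zero words when ckeys[keyid] is empty or unset.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=${keyid}: extra args: ${ckey[*]:-<none>}"
    done

Passing the pair through an array rather than a string keeps the option and its value as separate words when the command line is finally assembled.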
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjYwZWU2OTJlNjQ2OTU1NDZlNzU3MDBjYWNkZDkyZjZlYWQ0MDkwZjljMTcyZThioGhtZw==: 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: ]] 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.647 17:45:41 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.906 nvme0n1 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjVjMmVlYTIwZmNhNmJmYzlkODVlZmU1MWZlNmMxZGEzMzg4NzAwOWU2Yjk5YzJlNDI3MTA4MjVlODVjOTYxOTcLBKo=: 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjVjMmVlYTIwZmNhNmJmYzlkODVlZmU1MWZlNmMxZGEzMzg4NzAwOWU2Yjk5YzJlNDI3MTA4MjVlODVjOTYxOTcLBKo=: 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.906 17:45:42 
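[annotation] On the initiator side, each round traced above is the same two RPCs: pin the host to exactly one digest/DH-group combination with bdev_nvme_set_options, then attach with the matching DH-HMAC-CHAP key. Both commands appear verbatim in the trace; the sketch below assumes ./scripts/rpc.py from an SPDK checkout and that the key named key4 was loaded into SPDK's keyring earlier in the test (keyid 4 has no controller key, so --dhchap-ctrlr-key is omitted):

    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key4

Restricting the option lists to a single value per round is what lets the test attribute a successful connect to that exact digest/dhgroup pair.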
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.906 nvme0n1 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.906 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZhMWQ4M2ZmNzIyNzU0ZDQ1OWNkZWExZWY0NTFiZjSmES8p: 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZhMWQ4M2ZmNzIyNzU0ZDQ1OWNkZWExZWY0NTFiZjSmES8p: 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: ]] 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:04.165 17:45:42 
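[annotation] The nvmf/common.sh@767-781 block that repeats before every attach is get_main_ns_ip: it maps the active transport to the name of the environment variable holding the target address, then dereferences that name, which is why every round prints 192.168.100.8. A sketch under the same structure; the transport variable is shown already expanded to rdma in the trace, so its exact name (TEST_TRANSPORT here) is an assumption, and the empty-value guards ([[ -z ... ]]) are elided:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # Pick the variable *name* for this transport...
        ip=${ip_candidates[$TEST_TRANSPORT]}
        # ...then indirect-expand it to the address (192.168.100.8 in this run).
        echo "${!ip}"
    }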
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.165 nvme0n1 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.165 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.423 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.423 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.423 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.423 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.423 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.423 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.423 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:21:04.423 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.423 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:04.423 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:04.423 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:04.423 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:21:04.423 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:21:04.423 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:04.423 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:04.423 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:21:04.423 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: ]] 00:21:04.423 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:21:04.423 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:21:04.423 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # 
local digest dhgroup keyid ckey 00:21:04.423 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:04.423 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:04.423 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:04.423 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.423 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:04.423 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.423 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.423 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.423 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.424 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:04.424 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:04.424 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:04.424 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.424 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.424 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:04.424 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:04.424 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:04.424 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:04.424 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:04.424 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.424 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.424 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.424 nvme0n1 00:21:04.424 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.424 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.424 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.424 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.424 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.424 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.681 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.681 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.681 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.681 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.681 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.681 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.681 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:21:04.681 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.681 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:04.681 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:04.681 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:04.681 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:21:04.681 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:21:04.681 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:04.681 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:04.681 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:21:04.681 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: ]] 00:21:04.681 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:21:04.681 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:21:04.681 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.681 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:04.682 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:04.682 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:04.682 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.682 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:04.682 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.682 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.682 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.682 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.682 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:04.682 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:04.682 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:04.682 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.682 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.682 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:04.682 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:04.682 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:04.682 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:04.682 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:04.682 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.682 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.682 17:45:42 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.682 nvme0n1 00:21:04.682 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.682 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.682 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.682 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.682 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.682 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.939 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.939 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.939 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.939 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.939 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.939 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.939 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:21:04.939 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjYwZWU2OTJlNjQ2OTU1NDZlNzU3MDBjYWNkZDkyZjZlYWQ0MDkwZjljMTcyZThioGhtZw==: 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:YjYwZWU2OTJlNjQ2OTU1NDZlNzU3MDBjYWNkZDkyZjZlYWQ0MDkwZjljMTcyZThioGhtZw==: 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: ]] 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.940 nvme0n1 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.940 17:45:43 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.940 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.197 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.197 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.197 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:21:05.197 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.197 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjVjMmVlYTIwZmNhNmJmYzlkODVlZmU1MWZlNmMxZGEzMzg4NzAwOWU2Yjk5YzJlNDI3MTA4MjVlODVjOTYxOTcLBKo=: 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjVjMmVlYTIwZmNhNmJmYzlkODVlZmU1MWZlNmMxZGEzMzg4NzAwOWU2Yjk5YzJlNDI3MTA4MjVlODVjOTYxOTcLBKo=: 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # 
local ip 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.198 nvme0n1 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.198 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZhMWQ4M2ZmNzIyNzU0ZDQ1OWNkZWExZWY0NTFiZjSmES8p: 
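The secrets echoed by nvmet_auth_set_key above follow the NVMe DH-HMAC-CHAP secret representation DHHC-1:<hh>:<base64 secret>:, where the two-digit <hh> field names the hash used to transform the secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512). A minimal bash sketch of reading that field back out; the sample key is copied from the trace above:

# Pull the transformation-hash id out of a DHHC-1 secret (sample taken from the trace).
key='DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee:'
hash_id=${key#DHHC-1:}    # drop the "DHHC-1:" prefix
hash_id=${hash_id%%:*}    # keep the two-digit hash field
case $hash_id in
  00) echo 'secret used as-is (no transformation)' ;;
  01) echo 'secret transformed with SHA-256' ;;
  02) echo 'secret transformed with SHA-384' ;;
  03) echo 'secret transformed with SHA-512' ;;
esac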
00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZhMWQ4M2ZmNzIyNzU0ZDQ1OWNkZWExZWY0NTFiZjSmES8p: 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: ]] 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.456 17:45:43 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.456 nvme0n1 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.456 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: ]] 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:05.714 17:45:43 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.714 17:45:43 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.972 nvme0n1 00:21:05.972 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.972 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.972 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.972 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.972 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.972 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.972 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.972 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
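Every dhgroup/keyid iteration in this trace reduces to the same host-side RPC sequence. A condensed sketch, assuming rpc_cmd is the test suite's usual RPC wrapper and that keyN/ckeyN were registered with the keyring earlier in the script (that setup is outside this excerpt):

# One connect_authenticate round, condensed from the trace above.
digest=sha512 dhgroup=ffdhe4096 keyid=2
# Restrict the initiator to a single digest/DH-group combination.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
# Attach over RDMA, authenticating with key<id> (plus ckey<id> for bidirectional auth).
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
# Verify the controller actually came up, then tear it down for the next round.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0

When the controller key is empty (keyid 4 above has ckey=''), the --dhchap-ctrlr-key argument is simply omitted and authentication is unidirectional.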
00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: ]] 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP 
]] 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.973 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.232 nvme0n1 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjYwZWU2OTJlNjQ2OTU1NDZlNzU3MDBjYWNkZDkyZjZlYWQ0MDkwZjljMTcyZThioGhtZw==: 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjYwZWU2OTJlNjQ2OTU1NDZlNzU3MDBjYWNkZDkyZjZlYWQ0MDkwZjljMTcyZThioGhtZw==: 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: ]] 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # 
echo DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.232 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.490 nvme0n1 00:21:06.490 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.490 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.490 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.490 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.490 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.490 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.490 17:45:44 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.490 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.490 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.490 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjVjMmVlYTIwZmNhNmJmYzlkODVlZmU1MWZlNmMxZGEzMzg4NzAwOWU2Yjk5YzJlNDI3MTA4MjVlODVjOTYxOTcLBKo=: 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjVjMmVlYTIwZmNhNmJmYzlkODVlZmU1MWZlNmMxZGEzMzg4NzAwOWU2Yjk5YzJlNDI3MTA4MjVlODVjOTYxOTcLBKo=: 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.491 17:45:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.749 nvme0n1 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZhMWQ4M2ZmNzIyNzU0ZDQ1OWNkZWExZWY0NTFiZjSmES8p: 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:06.749 
17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZhMWQ4M2ZmNzIyNzU0ZDQ1OWNkZWExZWY0NTFiZjSmES8p: 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: ]] 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.749 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.315 nvme0n1 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.315 
17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: ]] 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe6144 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.315 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.574 nvme0n1 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key 
ckey 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: ]] 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:07.574 
17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.574 17:45:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.140 nvme0n1 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjYwZWU2OTJlNjQ2OTU1NDZlNzU3MDBjYWNkZDkyZjZlYWQ0MDkwZjljMTcyZThioGhtZw==: 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjYwZWU2OTJlNjQ2OTU1NDZlNzU3MDBjYWNkZDkyZjZlYWQ0MDkwZjljMTcyZThioGhtZw==: 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: ]] 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:08.140 17:45:46 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.140 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.398 nvme0n1 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.398 
17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjVjMmVlYTIwZmNhNmJmYzlkODVlZmU1MWZlNmMxZGEzMzg4NzAwOWU2Yjk5YzJlNDI3MTA4MjVlODVjOTYxOTcLBKo=: 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjVjMmVlYTIwZmNhNmJmYzlkODVlZmU1MWZlNmMxZGEzMzg4NzAwOWU2Yjk5YzJlNDI3MTA4MjVlODVjOTYxOTcLBKo=: 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:08.398 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:08.656 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:08.656 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.656 17:45:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.914 nvme0n1 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjZhMWQ4M2ZmNzIyNzU0ZDQ1OWNkZWExZWY0NTFiZjSmES8p: 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjZhMWQ4M2ZmNzIyNzU0ZDQ1OWNkZWExZWY0NTFiZjSmES8p: 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: ]] 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODI1N2M1YmY2OGEzZThjNDczYzUxNzZjZDI1M2FhZTNjNjZlZGI2YTM4ZDM5ZGE3ZTdhNjc3MDI0ODNkMDg2ZvnaJhk=: 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.914 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.479 nvme0n1 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
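The passes above all share one host-side shape: constrain the initiator to a single digest/dhgroup pair, attach with the matching DHHC-1 key, check that the controller appears, then detach before the next combination. A minimal sketch of one iteration, assuming SPDK's scripts/rpc.py on the path and key objects already registered as key0/ckey0 (the suite drives this through its rpc_cmd wrapper and keys created earlier in host/auth.sh):

    # pin the host to one digest/dhgroup combination
    rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    # attach with bidirectional DH-CHAP keys
    rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # the authenticated controller must show up, then is removed again
    [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc.py bdev_nvme_detach_controller nvme0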
00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: ]] 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:09.479 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:09.480 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:09.480 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:09.480 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:09.480 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:09.480 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.480 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.480 17:45:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.044 nvme0n1 00:21:10.044 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.044 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:10.044 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:10.044 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.044 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.044 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.044 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.044 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:10.044 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.044 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.044 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.044 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:10.044 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:21:10.044 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:10.044 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:10.044 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:10.044 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
00:21:10.044 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:21:10.044 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:21:10.044 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:10.044 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:10.044 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:21:10.044 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: ]] 00:21:10.044 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:21:10.045 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:21:10.045 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:10.045 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:10.045 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:10.045 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:10.045 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:10.045 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:10.045 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.045 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.045 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.045 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:10.045 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:10.045 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:10.045 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:10.045 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:10.045 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:10.045 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:10.045 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:10.045 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:10.045 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:10.045 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:10.045 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.045 17:45:48 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.045 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.609 nvme0n1 00:21:10.609 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.609 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:10.609 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:10.609 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.609 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.609 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.609 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.609 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:10.609 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.609 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.609 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.609 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:10.609 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:21:10.609 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:10.609 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:10.609 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:10.609 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:10.610 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YjYwZWU2OTJlNjQ2OTU1NDZlNzU3MDBjYWNkZDkyZjZlYWQ0MDkwZjljMTcyZThioGhtZw==: 00:21:10.610 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: 00:21:10.610 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:10.610 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:10.610 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YjYwZWU2OTJlNjQ2OTU1NDZlNzU3MDBjYWNkZDkyZjZlYWQ0MDkwZjljMTcyZThioGhtZw==: 00:21:10.610 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: ]] 00:21:10.610 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NWMwMmJjZmRhNDUxYzZhNGU5ZmI1OWIyYmNhOGQwM2JOe7ee: 00:21:10.610 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:21:10.610 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:10.610 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:10.610 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:10.610 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:10.610 17:45:48 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:10.610 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:10.610 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.610 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.610 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.610 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:10.610 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:10.610 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:10.610 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:10.610 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:10.610 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:10.610 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:10.610 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:10.610 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:10.610 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:10.610 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:10.610 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:10.610 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.610 17:45:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.175 nvme0n1 00:21:11.175 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.175 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:11.175 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.175 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:11.175 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.175 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.175 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.175 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:11.175 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.175 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.433 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.433 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for 
keyid in "${!keys[@]}" 00:21:11.433 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjVjMmVlYTIwZmNhNmJmYzlkODVlZmU1MWZlNmMxZGEzMzg4NzAwOWU2Yjk5YzJlNDI3MTA4MjVlODVjOTYxOTcLBKo=: 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjVjMmVlYTIwZmNhNmJmYzlkODVlZmU1MWZlNmMxZGEzMzg4NzAwOWU2Yjk5YzJlNDI3MTA4MjVlODVjOTYxOTcLBKo=: 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:11.434 17:45:49 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.434 17:45:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.999 nvme0n1 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: ]] 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
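On the target side, nvmet_auth_set_key (the host/auth.sh@42-51 lines) provisions the kernel nvmet host entry with the digest, dhgroup, and DHHC-1 key material echoed in the trace. The echo destinations are not visible here; a rough sketch assuming they are the standard nvmet configfs auth attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key):

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"        # digest, cf. auth.sh@48
    echo ffdhe2048 > "$host/dhchap_dhgroup"          # dhgroup, cf. auth.sh@49
    echo 'DHHC-1:00:ZGQ3...' > "$host/dhchap_key"    # key, truncated here; cf. auth.sh@50
    # a controller key, when one is set, goes to "$host/dhchap_ctrl_key" (auth.sh@51)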
common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:11.999 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.000 request: 00:21:12.000 { 00:21:12.000 "name": "nvme0", 00:21:12.000 "trtype": "rdma", 00:21:12.000 "traddr": "192.168.100.8", 00:21:12.000 "adrfam": "ipv4", 00:21:12.000 "trsvcid": "4420", 00:21:12.000 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:12.000 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:12.000 "prchk_reftag": false, 00:21:12.000 "prchk_guard": false, 00:21:12.000 "hdgst": false, 00:21:12.000 "ddgst": false, 00:21:12.000 "allow_unrecognized_csi": false, 00:21:12.000 "method": "bdev_nvme_attach_controller", 00:21:12.000 "req_id": 1 00:21:12.000 } 00:21:12.000 Got JSON-RPC 
error response 00:21:12.000 response: 00:21:12.000 { 00:21:12.000 "code": -5, 00:21:12.000 "message": "Input/output error" 00:21:12.000 } 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.000 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.259 request: 00:21:12.259 { 00:21:12.259 "name": "nvme0", 00:21:12.259 "trtype": "rdma", 00:21:12.259 "traddr": "192.168.100.8", 00:21:12.259 "adrfam": "ipv4", 00:21:12.259 "trsvcid": "4420", 00:21:12.259 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:12.259 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:12.259 "prchk_reftag": false, 00:21:12.259 "prchk_guard": false, 00:21:12.259 "hdgst": false, 00:21:12.259 "ddgst": false, 00:21:12.259 "dhchap_key": "key2", 00:21:12.259 "allow_unrecognized_csi": false, 00:21:12.259 "method": "bdev_nvme_attach_controller", 00:21:12.259 "req_id": 1 00:21:12.259 } 00:21:12.259 Got JSON-RPC error response 00:21:12.259 response: 00:21:12.259 { 00:21:12.259 "code": -5, 00:21:12.259 "message": "Input/output error" 00:21:12.259 } 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:12.259 17:45:50 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.259 request: 00:21:12.259 { 00:21:12.259 "name": "nvme0", 00:21:12.259 "trtype": "rdma", 00:21:12.259 "traddr": "192.168.100.8", 00:21:12.259 "adrfam": "ipv4", 00:21:12.259 "trsvcid": "4420", 00:21:12.259 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:12.259 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:12.259 "prchk_reftag": false, 00:21:12.259 "prchk_guard": false, 00:21:12.259 "hdgst": false, 00:21:12.259 "ddgst": false, 00:21:12.259 "dhchap_key": "key1", 00:21:12.259 "dhchap_ctrlr_key": "ckey2", 00:21:12.259 "allow_unrecognized_csi": false, 00:21:12.259 "method": "bdev_nvme_attach_controller", 00:21:12.259 "req_id": 1 00:21:12.259 } 00:21:12.259 Got JSON-RPC error response 00:21:12.259 response: 00:21:12.259 { 00:21:12.259 "code": -5, 00:21:12.259 "message": "Input/output error" 00:21:12.259 } 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- 
# ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.259 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.516 nvme0n1 00:21:12.516 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.516 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:12.516 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:12.516 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:12.516 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:12.516 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:12.516 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:21:12.516 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:21:12.516 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:12.516 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:12.516 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:21:12.516 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: ]] 00:21:12.516 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:21:12.516 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.516 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.516 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.516 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.516 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:21:12.516 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:21:12.516 17:45:50 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.516 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.517 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.517 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.517 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:12.517 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:12.517 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:12.517 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:12.517 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:12.517 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:12.517 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:12.517 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:12.517 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.517 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.517 request: 00:21:12.517 { 00:21:12.517 "name": "nvme0", 00:21:12.517 "dhchap_key": "key1", 00:21:12.517 "dhchap_ctrlr_key": "ckey2", 00:21:12.517 "method": "bdev_nvme_set_keys", 00:21:12.517 "req_id": 1 00:21:12.517 } 00:21:12.517 Got JSON-RPC error response 00:21:12.517 response: 00:21:12.517 { 00:21:12.517 "code": -13, 00:21:12.517 "message": "Permission denied" 00:21:12.517 } 00:21:12.517 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:12.517 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:12.517 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:12.517 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:12.517 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:12.517 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:12.517 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:12.517 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.517 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.517 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.517 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:21:12.517 17:45:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:21:13.888 17:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.888 17:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq 
length 00:21:13.888 17:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.888 17:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.888 17:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.888 17:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:21:13.888 17:45:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:21:14.819 17:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.819 17:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.819 17:45:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:14.819 17:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.819 17:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.819 17:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:21:14.819 17:45:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:21:15.751 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.751 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:15.751 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.751 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.751 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.751 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:21:15.751 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:15.751 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.751 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:15.751 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:15.751 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:15.751 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:21:15.751 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:21:15.751 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:15.751 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:15.751 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGQ3MmYxYzVkMmQ2MDM3MzRmMjRjYWI1YjlkYzU5OTE5M2Y5MjEzY2RkMDBiNTMx9A3keg==: 00:21:15.751 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: ]] 00:21:15.751 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ0ZTg4YzBmOTNmM2VlNTE2N2NmYzZjMjY5ZDI0ODUwMjJiYTJhZTQ5OTJkNjU3R4Onpg==: 00:21:15.751 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:21:15.751 17:45:54 
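The repeating jq length / sleep 1s iterations above are a drain loop: the controller was attached with --ctrlr-loss-timeout-sec 1 and --reconnect-delay-sec 1, so once the target has been re-keyed underneath it, the suite only has to poll bdev_nvme_get_controllers until the stale controller gives up and disappears. The loop reduces to roughly:

    # wait for the stale controller to drop out, cf. host/auth.sh@137-138
    while (( $(rpc.py bdev_nvme_get_controllers | jq length) != 0 )); do
        sleep 1s
    done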
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@767 -- # local ip 00:21:15.751 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates=() 00:21:15.751 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # local -A ip_candidates 00:21:15.751 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.751 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.751 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z rdma ]] 00:21:15.751 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:21:15.751 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # ip=NVMF_FIRST_TARGET_IP 00:21:15.751 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # [[ -z 192.168.100.8 ]] 00:21:15.751 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@781 -- # echo 192.168.100.8 00:21:15.751 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:15.751 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.751 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.008 nvme0n1 00:21:16.008 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.008 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:16.008 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:16.008 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:16.008 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:16.008 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:16.008 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:21:16.008 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:21:16.008 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:16.008 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:16.008 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MzcyMGQyZTY2NzRmNmIxOTU3ZGE2MjM4YjFiNjk3NzBanqML: 00:21:16.008 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: ]] 00:21:16.008 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:Y2VjNGVlMjVlOTY4MGNkNWYxOTJkZTExMGJhYTU2ZTBgPwAs: 00:21:16.008 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:16.008 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:16.008 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg 
rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:16.008 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:16.008 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:16.009 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:16.009 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:16.009 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:16.009 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.009 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.009 request: 00:21:16.009 { 00:21:16.009 "name": "nvme0", 00:21:16.009 "dhchap_key": "key2", 00:21:16.009 "dhchap_ctrlr_key": "ckey1", 00:21:16.009 "method": "bdev_nvme_set_keys", 00:21:16.009 "req_id": 1 00:21:16.009 } 00:21:16.009 Got JSON-RPC error response 00:21:16.009 response: 00:21:16.009 { 00:21:16.009 "code": -13, 00:21:16.009 "message": "Permission denied" 00:21:16.009 } 00:21:16.009 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:16.009 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:16.009 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:16.009 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:16.009 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:16.009 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:16.009 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.009 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:16.009 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.009 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.009 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:21:16.009 17:45:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:21:17.380 17:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:17.380 17:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:17.380 17:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:17.380 17:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.380 17:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:17.380 17:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:21:17.380 17:45:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:21:18.313 17:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:18.313 17:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:18.313 17:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:18.313 17:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.313 17:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:18.313 17:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:21:18.313 17:45:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:19.370 rmmod nvme_rdma 00:21:19.370 rmmod nvme_fabrics 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@515 -- # '[' -n 704165 ']' 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # killprocess 704165 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 704165 ']' 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 704165 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 704165 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
704165' 00:21:19.370 killing process with pid 704165 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 704165 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 704165 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:21:19.370 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:19.641 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:19.641 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:21:19.641 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:21:19.641 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # echo 0 00:21:19.641 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:19.641 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:19.641 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:19.641 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:19.641 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modules=(/sys/module/nvmet/holders/*) 00:21:19.641 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modprobe -r nvmet_rdma nvmet 00:21:19.641 17:45:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:21:22.921 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:21:22.921 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:21:22.921 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:21:22.921 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:21:22.921 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:21:22.921 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:21:22.921 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:21:22.921 0000:af:00.0 (8086 2701): nvme -> vfio-pci 00:21:22.921 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:21:22.921 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:21:22.921 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:21:22.921 0000:5e:00.0 (144d a80a): nvme -> vfio-pci 00:21:22.921 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:21:22.921 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:21:22.921 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:21:22.921 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:21:22.921 0000:b0:00.0 (8086 2701): nvme -> vfio-pci 00:21:22.921 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:21:22.921 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:21:23.179 17:46:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.ZeL /tmp/spdk.key-null.JfG /tmp/spdk.key-sha256.EUC /tmp/spdk.key-sha384.IVc /tmp/spdk.key-sha512.M1W /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:21:23.179 17:46:01 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:21:26.461 0000:5e:00.0 (144d a80a): Already using the vfio-pci driver 00:21:26.461 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:21:26.461 0000:af:00.0 (8086 2701): Already using the vfio-pci driver 00:21:26.461 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:21:26.461 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:21:26.461 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:21:26.461 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:21:26.461 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:21:26.461 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:21:26.461 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:21:26.461 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:21:26.461 0000:b0:00.0 (8086 2701): Already using the vfio-pci driver 00:21:26.461 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:21:26.461 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:21:26.461 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:21:26.461 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:21:26.461 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:21:26.461 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:21:26.461 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:21:26.461 00:21:26.461 real 0m56.062s 00:21:26.461 user 0m46.951s 00:21:26.461 sys 0m15.839s 00:21:26.461 17:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:26.461 17:46:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.461 ************************************ 00:21:26.461 END TEST nvmf_auth_host 00:21:26.461 ************************************ 00:21:26.720 17:46:04 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]] 00:21:26.720 17:46:04 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:21:26.720 17:46:04 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:21:26.720 17:46:04 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:21:26.720 17:46:04 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:21:26.720 17:46:04 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:26.720 17:46:04 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:26.720 17:46:04 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.720 ************************************ 00:21:26.720 START TEST nvmf_bdevperf 00:21:26.720 ************************************ 00:21:26.720 17:46:04 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:21:26.720 * Looking for test storage... 
00:21:26.720 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:26.720 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:26.720 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:21:26.720 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:26.980 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:26.980 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:26.980 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:26.980 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:26.980 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:21:26.980 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:21:26.980 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:21:26.980 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:21:26.980 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:21:26.980 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:21:26.980 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:21:26.980 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:26.980 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:21:26.980 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:21:26.980 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:26.980 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:26.980 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:26.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.981 --rc genhtml_branch_coverage=1 00:21:26.981 --rc genhtml_function_coverage=1 00:21:26.981 --rc genhtml_legend=1 00:21:26.981 --rc geninfo_all_blocks=1 00:21:26.981 --rc geninfo_unexecuted_blocks=1 00:21:26.981 00:21:26.981 ' 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:26.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.981 --rc genhtml_branch_coverage=1 00:21:26.981 --rc genhtml_function_coverage=1 00:21:26.981 --rc genhtml_legend=1 00:21:26.981 --rc geninfo_all_blocks=1 00:21:26.981 --rc geninfo_unexecuted_blocks=1 00:21:26.981 00:21:26.981 ' 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:26.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.981 --rc genhtml_branch_coverage=1 00:21:26.981 --rc genhtml_function_coverage=1 00:21:26.981 --rc genhtml_legend=1 00:21:26.981 --rc geninfo_all_blocks=1 00:21:26.981 --rc geninfo_unexecuted_blocks=1 00:21:26.981 00:21:26.981 ' 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:26.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:26.981 --rc genhtml_branch_coverage=1 00:21:26.981 --rc genhtml_function_coverage=1 00:21:26.981 --rc genhtml_legend=1 00:21:26.981 --rc geninfo_all_blocks=1 00:21:26.981 --rc geninfo_unexecuted_blocks=1 00:21:26.981 00:21:26.981 ' 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:26.981 17:46:05 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:26.981 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:21:26.981 17:46:05 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:26.981 17:46:05 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:33.539 17:46:11 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:21:33.539 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:33.539 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:21:33.539 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ 
mlx5 == e810 ]] 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:21:33.540 Found net devices under 0000:18:00.0: mlx_0_0 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:21:33.540 Found net devices under 0000:18:00.1: mlx_0_1 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # is_hw=yes 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # rdma_device_init 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # uname 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe ib_cm 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@528 -- # allocate_nic_ips 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@76 -- # (( count = 
NVMF_IP_LEAST_ADDR )) 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:21:33.540 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:33.540 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:21:33.540 altname enp24s0f0np0 00:21:33.540 altname ens785f0np0 00:21:33.540 inet 192.168.100.8/24 scope global mlx_0_0 00:21:33.540 valid_lft forever preferred_lft forever 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:21:33.540 17:46:11 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:21:33.540 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:33.540 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:21:33.540 altname enp24s0f1np1 00:21:33.540 altname ens785f1np1 00:21:33.540 inet 192.168.100.9/24 scope global mlx_0_1 00:21:33.540 valid_lft forever preferred_lft forever 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # return 0 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@109 -- # continue 2 00:21:33.540 17:46:11 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:33.540 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:21:33.540 192.168.100.9' 00:21:33.541 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:21:33.541 192.168.100.9' 00:21:33.541 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # head -n 1 00:21:33.541 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:33.541 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:21:33.541 192.168.100.9' 00:21:33.541 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # tail -n +2 00:21:33.541 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # head -n 1 00:21:33.541 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:33.541 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:21:33.541 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:33.541 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:21:33.541 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:21:33.541 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:21:33.798 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:21:33.798 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:21:33.798 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:33.798 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:33.798 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:33.798 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=716627 00:21:33.798 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:33.798 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@508 -- # waitforlisten 716627 00:21:33.798 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 716627 ']' 00:21:33.798 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.798 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:33.798 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.798 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:33.798 17:46:11 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:33.798 [2024-10-17 17:46:12.007341] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:21:33.798 [2024-10-17 17:46:12.007406] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.798 [2024-10-17 17:46:12.082125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:33.798 [2024-10-17 17:46:12.125083] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.798 [2024-10-17 17:46:12.125131] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.798 [2024-10-17 17:46:12.125140] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:33.798 [2024-10-17 17:46:12.125149] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:33.798 [2024-10-17 17:46:12.125156] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
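Note: at this point the harness has launched nvmf_tgt and is blocking in waitforlisten until the app's RPC socket answers. A minimal standalone sketch of that startup handshake, assuming the workspace-relative paths seen in this trace and substituting the public framework_wait_init RPC for the harness-internal waitforlisten helper:

    # Start the NVMe-oF target: shm id 0, all tracepoint groups, cores 1-3 (mask 0xE).
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Block until subsystem init completes and the UNIX-domain RPC socket answers.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init
    # The trace then creates the RDMA transport exactly as host/bdevperf.sh@17 does:
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192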
00:21:33.798 [2024-10-17 17:46:12.126389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:33.798 [2024-10-17 17:46:12.126489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:33.798 [2024-10-17 17:46:12.126491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:34.055 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:34.055 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:21:34.055 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:34.055 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:34.055 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:34.055 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.055 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:34.055 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.055 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:34.055 [2024-10-17 17:46:12.304291] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24c0ab0/0x24c4fa0) succeed. 00:21:34.055 [2024-10-17 17:46:12.314619] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24c20a0/0x2506640) succeed. 00:21:34.055 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.055 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:34.055 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.055 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:34.055 Malloc0 00:21:34.055 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.055 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:34.055 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.055 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:34.311 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.311 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:34.311 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.311 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:34.311 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.311 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:34.311 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.311 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set 
+x 00:21:34.311 [2024-10-17 17:46:12.462598] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:34.311 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.311 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:21:34.311 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:21:34.311 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:21:34.311 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:21:34.312 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:34.312 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:34.312 { 00:21:34.312 "params": { 00:21:34.312 "name": "Nvme$subsystem", 00:21:34.312 "trtype": "$TEST_TRANSPORT", 00:21:34.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.312 "adrfam": "ipv4", 00:21:34.312 "trsvcid": "$NVMF_PORT", 00:21:34.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.312 "hdgst": ${hdgst:-false}, 00:21:34.312 "ddgst": ${ddgst:-false} 00:21:34.312 }, 00:21:34.312 "method": "bdev_nvme_attach_controller" 00:21:34.312 } 00:21:34.312 EOF 00:21:34.312 )") 00:21:34.312 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:21:34.312 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:21:34.312 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:21:34.312 17:46:12 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:21:34.312 "params": { 00:21:34.312 "name": "Nvme1", 00:21:34.312 "trtype": "rdma", 00:21:34.312 "traddr": "192.168.100.8", 00:21:34.312 "adrfam": "ipv4", 00:21:34.312 "trsvcid": "4420", 00:21:34.312 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.312 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:34.312 "hdgst": false, 00:21:34.312 "ddgst": false 00:21:34.312 }, 00:21:34.312 "method": "bdev_nvme_attach_controller" 00:21:34.312 }' 00:21:34.312 [2024-10-17 17:46:12.517730] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:21:34.312 [2024-10-17 17:46:12.517791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid716808 ] 00:21:34.312 [2024-10-17 17:46:12.588250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.312 [2024-10-17 17:46:12.632706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.568 Running I/O for 1 seconds... 
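Note: the run above feeds bdevperf its bdev configuration as JSON over /dev/fd/62, assembled by gen_nvmf_target_json from the printf visible in the trace. A standalone sketch of the equivalent invocation; the outer "subsystems"/"bdev" wrapper is an assumption here (gen_nvmf_target_json wraps the printed params object in it), while every parameter value is copied from the trace:

    # Write the attach-controller config to a file instead of passing it via /dev/fd/62.
    cat > /tmp/bdevperf_nvme1.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "rdma",
                "traddr": "192.168.100.8",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # 128 outstanding I/Os, 4 KiB blocks, verify workload, 1 second, as in the trace.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
        --json /tmp/bdevperf_nvme1.json -q 128 -o 4096 -w verify -t 1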
00:21:35.496 17792.00 IOPS, 69.50 MiB/s 00:21:35.496 Latency(us) 00:21:35.496 [2024-10-17T15:46:13.887Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.496 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:35.496 Verification LBA range: start 0x0 length 0x4000 00:21:35.496 Nvme1n1 : 1.01 17831.77 69.66 0.00 0.00 7135.91 2051.56 11226.60 00:21:35.496 [2024-10-17T15:46:13.887Z] =================================================================================================================== 00:21:35.496 [2024-10-17T15:46:13.887Z] Total : 17831.77 69.66 0.00 0.00 7135.91 2051.56 11226.60 00:21:35.754 17:46:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=716997 00:21:35.754 17:46:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:21:35.754 17:46:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:21:35.754 17:46:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:21:35.754 17:46:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config=() 00:21:35.754 17:46:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # local subsystem config 00:21:35.754 17:46:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:35.754 17:46:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:35.754 { 00:21:35.754 "params": { 00:21:35.754 "name": "Nvme$subsystem", 00:21:35.754 "trtype": "$TEST_TRANSPORT", 00:21:35.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.754 "adrfam": "ipv4", 00:21:35.754 "trsvcid": "$NVMF_PORT", 00:21:35.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.754 "hdgst": ${hdgst:-false}, 00:21:35.754 "ddgst": ${ddgst:-false} 00:21:35.754 }, 00:21:35.754 "method": "bdev_nvme_attach_controller" 00:21:35.754 } 00:21:35.754 EOF 00:21:35.754 )") 00:21:35.754 17:46:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@580 -- # cat 00:21:35.754 17:46:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # jq . 00:21:35.754 17:46:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@583 -- # IFS=, 00:21:35.754 17:46:14 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:21:35.754 "params": { 00:21:35.754 "name": "Nvme1", 00:21:35.754 "trtype": "rdma", 00:21:35.754 "traddr": "192.168.100.8", 00:21:35.754 "adrfam": "ipv4", 00:21:35.754 "trsvcid": "4420", 00:21:35.754 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:35.754 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:35.754 "hdgst": false, 00:21:35.754 "ddgst": false 00:21:35.754 }, 00:21:35.754 "method": "bdev_nvme_attach_controller" 00:21:35.754 }' 00:21:35.754 [2024-10-17 17:46:14.080875] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
00:21:35.754 [2024-10-17 17:46:14.080934] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid716997 ] 00:21:36.011 [2024-10-17 17:46:14.152377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.011 [2024-10-17 17:46:14.194476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.011 Running I/O for 15 seconds... 00:21:38.308 17792.00 IOPS, 69.50 MiB/s [2024-10-17T15:46:17.262Z] 17902.00 IOPS, 69.93 MiB/s [2024-10-17T15:46:17.262Z] 17:46:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 716627 00:21:38.871 17:46:17 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:21:39.693 15839.33 IOPS, 61.87 MiB/s [2024-10-17T15:46:18.084Z] [2024-10-17 17:46:18.071920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:118000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c2000 len:0x1000 key:0x181f00 00:21:39.693 [2024-10-17 17:46:18.071961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dcc01000 sqhd:7250 p:0 m:0 dnr:0 00:21:39.693 [2024-10-17 17:46:18.071980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:118008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043c0000 len:0x1000 key:0x181f00 00:21:39.693 [2024-10-17 17:46:18.071989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dcc01000 sqhd:7250 p:0 m:0 dnr:0 00:21:39.693 [2024-10-17 17:46:18.072000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:118016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043be000 len:0x1000 key:0x181f00 00:21:39.693 [2024-10-17 17:46:18.072009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dcc01000 sqhd:7250 p:0 m:0 dnr:0 00:21:39.693 [2024-10-17 17:46:18.072020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:118024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043bc000 len:0x1000 key:0x181f00 00:21:39.693 [2024-10-17 17:46:18.072035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dcc01000 sqhd:7250 p:0 m:0 dnr:0 00:21:39.693 [2024-10-17 17:46:18.072046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:118032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043ba000 len:0x1000 key:0x181f00 00:21:39.693 [2024-10-17 17:46:18.072054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dcc01000 sqhd:7250 p:0 m:0 dnr:0 00:21:39.693 [2024-10-17 17:46:18.072064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:118040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b8000 len:0x1000 key:0x181f00 00:21:39.693 [2024-10-17 17:46:18.072073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:dcc01000 sqhd:7250 p:0 m:0 dnr:0 00:21:39.693 [2024-10-17 17:46:18.072083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:118048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000043b6000 len:0x1000 key:0x181f00 00:21:39.693 [2024-10-17 17:46:18.072092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:dcc01000 sqhd:7250 p:0 m:0 dnr:0 00:21:39.693
[... 120 near-identical nvme_qpair command/completion pairs elided: every remaining in-flight READ (LBAs 118056-118776, SGL KEYED DATA BLOCK, key:0x181f00) and WRITE (LBAs 118784-119008, SGL DATA BLOCK, len:8 each) on qpair 1 completed with ABORTED - SQ DELETION (00/08) qid:1 cdw0:dcc01000 sqhd:7250 as the submission queue was torn down for the controller reset ...]
00:21:39.954 [2024-10-17 17:46:18.084892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:39.954 [2024-10-17 17:46:18.084909] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:39.954 [2024-10-17 17:46:18.084920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
WRITE sqid:1 cid:0 nsid:1 lba:119016 len:8 PRP1 0x0 PRP2 0x0 00:21:39.954 [2024-10-17 17:46:18.084933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:39.954 [2024-10-17 17:46:18.084984] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000168e4900 was disconnected and freed. reset controller. 00:21:39.954 [2024-10-17 17:46:18.085026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.954 [2024-10-17 17:46:18.085041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32766 cdw0:22ee4d0 sqhd:33c0 p:0 m:0 dnr:0 00:21:39.954 [2024-10-17 17:46:18.085054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.954 [2024-10-17 17:46:18.085070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32766 cdw0:22ee4d0 sqhd:33c0 p:0 m:0 dnr:0 00:21:39.954 [2024-10-17 17:46:18.085083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.954 [2024-10-17 17:46:18.085095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32766 cdw0:22ee4d0 sqhd:33c0 p:0 m:0 dnr:0 00:21:39.954 [2024-10-17 17:46:18.085108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:39.954 [2024-10-17 17:46:18.085121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32766 cdw0:22ee4d0 sqhd:33c0 p:0 m:0 dnr:0 00:21:39.954 [2024-10-17 17:46:18.103921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:39.954 [2024-10-17 17:46:18.103947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:39.954 [2024-10-17 17:46:18.103975] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:39.954 [2024-10-17 17:46:18.106638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:39.954 [2024-10-17 17:46:18.109137] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:39.954 [2024-10-17 17:46:18.109165] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:39.954 [2024-10-17 17:46:18.109174] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168ed000 00:21:40.775 11879.50 IOPS, 46.40 MiB/s [2024-10-17T15:46:19.166Z] [2024-10-17 17:46:19.112715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:40.775 [2024-10-17 17:46:19.112775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
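[Annotation] The RDMA_CM_EVENT_REJECTED / "RDMA connect error -74" records here are expected: bdevperf.sh deliberately SIGKILLed the first nvmf_tgt (pid 716627, host/bdevperf.sh@33 in the trace above) while bdevperf was mid-run, so every host-side reconnect attempt is rejected until the target is restarted. A hedged reconstruction of that script step, with the @NN line references taken from the trace (variable names and helper bodies are assumptions):

  kill -9 "$nvmfpid"    # @33: hard-kill the running nvmf_tgt (716627) mid-I/O
  sleep 3               # @35: leave the host reconnect loop failing against a dead target
  tgt_init              # @36: restart nvmf_tgt and rebuild its configuration over RPC
  wait "$bdevperfpid"   # @38: bdevperf (716997, started with -f) rides out the outage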
00:21:40.775 [2024-10-17 17:46:19.113151] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:40.775 [2024-10-17 17:46:19.113162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:40.775 [2024-10-17 17:46:19.113171] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:21:40.775 [2024-10-17 17:46:19.115886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:40.775 [2024-10-17 17:46:19.120136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:40.775 [2024-10-17 17:46:19.122300] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:40.775 [2024-10-17 17:46:19.122322] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:40.775 [2024-10-17 17:46:19.122330] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168ed000 00:21:41.962 9503.60 IOPS, 37.12 MiB/s [2024-10-17T15:46:20.353Z] /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 716627 Killed "${NVMF_APP[@]}" "$@" 00:21:41.962 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:21:41.962 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:21:41.962 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:41.962 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:41.962 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:41.962 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # nvmfpid=717732 00:21:41.962 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # waitforlisten 717732 00:21:41.962 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:41.962 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 717732 ']' 00:21:41.962 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.962 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:41.962 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.962 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:41.962 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:41.962 [2024-10-17 17:46:20.102614] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
00:21:41.962 [2024-10-17 17:46:20.102669] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.962 [2024-10-17 17:46:20.125900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:41.962 [2024-10-17 17:46:20.125926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:41.962 [2024-10-17 17:46:20.126105] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:41.962 [2024-10-17 17:46:20.126116] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:41.962 [2024-10-17 17:46:20.126128] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:21:41.962 [2024-10-17 17:46:20.128888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:41.962 [2024-10-17 17:46:20.134046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:41.962 [2024-10-17 17:46:20.136349] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:41.962 [2024-10-17 17:46:20.136371] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:41.962 [2024-10-17 17:46:20.136380] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000168ed000 00:21:41.962 [2024-10-17 17:46:20.176588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:41.962 [2024-10-17 17:46:20.220398] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.962 [2024-10-17 17:46:20.220437] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.962 [2024-10-17 17:46:20.220446] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.962 [2024-10-17 17:46:20.220455] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.962 [2024-10-17 17:46:20.220462] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
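[Annotation] With the new nvmf_tgt (pid 717732) up on cores 1-3, the rpc_cmd calls traced just below rebuild the target state from scratch: RDMA transport, a 64 MiB malloc bdev, the cnode1 subsystem, its namespace, and the RDMA listener. Their standalone rpc.py equivalents, for reproducing this outside the test harness (the default /var/tmp/spdk.sock socket is an assumption here):

  # rpc.py equivalents of the rpc_cmd sequence traced below
  R=./scripts/rpc.py
  $R nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $R bdev_malloc_create 64 512 -b Malloc0
  $R nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $R nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $R nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420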
00:21:41.962 [2024-10-17 17:46:20.221563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.962 [2024-10-17 17:46:20.221641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:41.962 [2024-10-17 17:46:20.221643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.962 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:41.962 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:21:41.962 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:41.962 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:41.962 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:42.220 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.220 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:42.220 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.220 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:42.220 [2024-10-17 17:46:20.395172] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18cbab0/0x18cffa0) succeed. 00:21:42.220 7919.67 IOPS, 30.94 MiB/s [2024-10-17T15:46:20.611Z] [2024-10-17 17:46:20.405520] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18cd0a0/0x1911640) succeed. 00:21:42.220 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.220 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:42.220 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.220 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:42.220 Malloc0 00:21:42.220 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.220 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:42.220 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.220 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:42.220 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.220 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:42.220 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.220 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:42.220 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.220 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:42.220 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.220 17:46:20 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:21:42.220 [2024-10-17 17:46:20.556178] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:21:42.220 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:42.220 17:46:20 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 716997
00:21:42.783 [2024-10-17 17:46:21.139905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:21:42.783 [2024-10-17 17:46:21.139935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:42.783 [2024-10-17 17:46:21.140113] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:42.783 [2024-10-17 17:46:21.140124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:42.783 [2024-10-17 17:46:21.140135] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:21:42.783 [2024-10-17 17:46:21.140806] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:42.783 [2024-10-17 17:46:21.142910] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:42.783 [2024-10-17 17:46:21.153803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:43.038 [2024-10-17 17:46:21.194761] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:21:44.404 7304.00 IOPS, 28.53 MiB/s [2024-10-17T15:46:23.726Z]
8621.75 IOPS, 33.68 MiB/s [2024-10-17T15:46:24.656Z]
9649.44 IOPS, 37.69 MiB/s [2024-10-17T15:46:25.586Z]
10467.80 IOPS, 40.89 MiB/s [2024-10-17T15:46:26.517Z]
11140.27 IOPS, 43.52 MiB/s [2024-10-17T15:46:27.447Z]
11698.42 IOPS, 45.70 MiB/s [2024-10-17T15:46:28.815Z]
12173.54 IOPS, 47.55 MiB/s [2024-10-17T15:46:29.746Z]
12579.29 IOPS, 49.14 MiB/s [2024-10-17T15:46:29.746Z]
12932.13 IOPS, 50.52 MiB/s
00:21:51.355 Latency(us)
00:21:51.355 [2024-10-17T15:46:29.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:51.355 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:51.355 Verification LBA range: start 0x0 length 0x4000
00:21:51.355 Nvme1n1 : 15.00 12932.39 50.52 10334.05 0.00 5482.39 373.98 1072282.94
00:21:51.355 [2024-10-17T15:46:29.746Z] ===================================================================================================================
00:21:51.355 [2024-10-17T15:46:29.746Z] Total : 12932.39 50.52 10334.05 0.00 5482.39 373.98 1072282.94
00:21:51.355 17:46:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:21:51.355 17:46:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:51.355 17:46:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:51.355 17:46:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:21:51.355 17:46:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:51.355 17:46:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:21:51.355 17:46:29 nvmf_rdma.nvmf_host.nvmf_bdevperf
-- host/bdevperf.sh@44 -- # nvmftestfini 00:21:51.355 17:46:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:51.355 17:46:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:21:51.355 17:46:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:21:51.355 17:46:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:21:51.355 17:46:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:21:51.355 17:46:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:51.355 17:46:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:21:51.355 rmmod nvme_rdma 00:21:51.355 rmmod nvme_fabrics 00:21:51.355 17:46:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:51.355 17:46:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:21:51.355 17:46:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:21:51.355 17:46:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@515 -- # '[' -n 717732 ']' 00:21:51.355 17:46:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # killprocess 717732 00:21:51.355 17:46:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 717732 ']' 00:21:51.355 17:46:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 717732 00:21:51.355 17:46:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:21:51.355 17:46:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:51.355 17:46:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 717732 00:21:51.355 17:46:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:51.355 17:46:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:51.355 17:46:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 717732' 00:21:51.355 killing process with pid 717732 00:21:51.355 17:46:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 717732 00:21:51.355 17:46:29 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 717732 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:21:51.921 00:21:51.921 real 0m25.065s 00:21:51.921 user 1m2.701s 00:21:51.921 sys 0m6.342s 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:51.921 ************************************ 00:21:51.921 END TEST nvmf_bdevperf 00:21:51.921 ************************************ 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:51.921 17:46:30 
nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.921 ************************************ 00:21:51.921 START TEST nvmf_target_disconnect 00:21:51.921 ************************************ 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:21:51.921 * Looking for test storage... 00:21:51.921 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:51.921 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:51.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.921 --rc genhtml_branch_coverage=1 00:21:51.921 --rc genhtml_function_coverage=1 00:21:51.922 --rc genhtml_legend=1 00:21:51.922 --rc geninfo_all_blocks=1 00:21:51.922 --rc geninfo_unexecuted_blocks=1 00:21:51.922 00:21:51.922 ' 00:21:51.922 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:51.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.922 --rc genhtml_branch_coverage=1 00:21:51.922 --rc genhtml_function_coverage=1 00:21:51.922 --rc genhtml_legend=1 00:21:51.922 --rc geninfo_all_blocks=1 00:21:51.922 --rc geninfo_unexecuted_blocks=1 00:21:51.922 00:21:51.922 ' 00:21:51.922 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:51.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.922 --rc genhtml_branch_coverage=1 00:21:51.922 --rc genhtml_function_coverage=1 00:21:51.922 --rc genhtml_legend=1 00:21:51.922 --rc geninfo_all_blocks=1 00:21:51.922 --rc geninfo_unexecuted_blocks=1 00:21:51.922 00:21:51.922 ' 00:21:51.922 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:51.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.922 --rc genhtml_branch_coverage=1 00:21:51.922 --rc genhtml_function_coverage=1 00:21:51.922 --rc genhtml_legend=1 00:21:51.922 --rc geninfo_all_blocks=1 00:21:51.922 --rc geninfo_unexecuted_blocks=1 00:21:51.922 00:21:51.922 ' 00:21:51.922 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:51.922 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@7 -- # uname -s 00:21:51.922 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:51.922 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:51.922 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:51.922 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:51.922 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:51.922 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:51.922 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:51.922 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:51.922 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:51.922 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:51.922 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:21:51.922 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:21:51.922 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:51.922 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:51.922 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:51.922 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:51.922 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:51.922 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:21:51.922 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:51.922 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:51.922 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:52.180 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.180 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.180 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.180 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:21:52.180 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:52.180 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:21:52.180 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:52.180 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:52.180 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:52.180 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:52.180 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:52.180 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:52.180 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:52.180 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:52.180 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:52.180 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:52.180 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:21:52.180 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:21:52.180 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:21:52.180 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:21:52.180 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:21:52.180 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:52.180 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:52.180 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:52.180 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:52.180 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.180 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:52.180 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:52.180 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:52.180 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:52.180 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:21:52.180 17:46:30 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:21:58.741 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:21:58.741 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:21:58.741 17:46:36 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:21:58.741 Found net devices under 0000:18:00.0: mlx_0_0 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:58.741 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:21:58.742 Found net devices under 0000:18:00.1: mlx_0_1 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # rdma_device_init 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@527 -- # load_ib_rdma_modules 
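[Note] rdma_device_init / load_ib_rdma_modules, traced on the next line, amounts to loading the kernel RDMA/IB stack before any IP is assigned. The same modprobe sequence, lifted from the trace into a standalone sketch (run as root; module names and order are exactly those probed in the log):

    #!/usr/bin/env bash
    set -e
    # identical order to load_ib_rdma_modules in nvmf/common.sh as logged
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done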
00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # uname 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe ib_cm 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe ib_core 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe ib_umad 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@70 -- # modprobe iw_cm 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@528 -- # allocate_nic_ips 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # get_rdma_if_list 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:58.742 17:46:36 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:21:58.742 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:58.742 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:21:58.742 altname enp24s0f0np0 00:21:58.742 altname ens785f0np0 00:21:58.742 inet 192.168.100.8/24 scope global mlx_0_0 00:21:58.742 valid_lft forever preferred_lft forever 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:21:58.742 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:58.742 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:21:58.742 altname enp24s0f1np1 00:21:58.742 altname ens785f1np1 00:21:58.742 inet 192.168.100.9/24 scope global mlx_0_1 00:21:58.742 valid_lft forever preferred_lft forever 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # return 0 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # get_rdma_if_list 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 
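[Note] The get_ip_address calls above recover each port's IPv4 address by slicing `ip -o -4 addr show` output; the `ip addr show` dumps confirm 192.168.100.8 on mlx_0_0 and 192.168.100.9 on mlx_0_1. Reassembled as a standalone helper (the pipeline is verbatim from the trace):

    get_ip_address() {
        local interface=$1
        # one-line output puts ADDR/PREFIX in column 4; strip the prefix length
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # prints 192.168.100.8 on this testbed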
00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_0 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@108 -- # echo mlx_0_1 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@109 -- # continue 2 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # awk '{print $4}' 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # cut -d/ -f1 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:21:58.742 192.168.100.9' 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:21:58.742 192.168.100.9' 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # head -n 1 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # tail -n +2 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:21:58.742 192.168.100.9' 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # head -n 1 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:21:58.742 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:21:58.743 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:21:58.743 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:58.743 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:58.743 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:21:58.743 ************************************ 00:21:58.743 START TEST nvmf_target_disconnect_tc1 00:21:58.743 ************************************ 00:21:58.743 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:21:58.743 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:21:58.743 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:21:58.743 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:21:58.743 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:21:58.743 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:58.743 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:21:58.743 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:58.743 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:21:58.743 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:58.743 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:21:58.743 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:21:58.743 17:46:36 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:21:58.743 [2024-10-17 17:46:36.896859] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:21:58.743 [2024-10-17 17:46:36.896912] nvme_rdma.c:1088:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:21:58.743 [2024-10-17 17:46:36.896927] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7000 00:21:59.673 [2024-10-17 17:46:37.900374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:59.673 [2024-10-17 17:46:37.900447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
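[Note] The common.sh trace around this point (valid_exec_arg, `local es=0`, and the `es=1` / `(( !es == 0 ))` checks evaluated just below) is the suite's NOT wrapper: tc1 passes precisely because spdk_nvme_probe() fails while no listener yet exists at 192.168.100.8. A condensed sketch of that inversion, simplified from what the trace shows (the real helper also validates the executable and treats exit codes above 128 specially):

    NOT() {
        local es=0
        "$@" || es=$?   # run the wrapped command, keep its exit status
        (( es != 0 ))   # succeed only if the command failed
    }
    # usage mirroring the logged invocation:
    NOT ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'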
00:21:59.673 [2024-10-17 17:46:37.900481] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:21:59.673 [2024-10-17 17:46:37.900547] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:59.673 [2024-10-17 17:46:37.900577] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:21:59.673 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:21:59.673 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:21:59.673 Initializing NVMe Controllers 00:21:59.673 17:46:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:21:59.673 17:46:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:59.673 17:46:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:59.673 17:46:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:59.673 00:21:59.673 real 0m1.141s 00:21:59.673 user 0m0.878s 00:21:59.673 sys 0m0.255s 00:21:59.673 17:46:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:59.673 17:46:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:59.673 ************************************ 00:21:59.673 END TEST nvmf_target_disconnect_tc1 00:21:59.673 ************************************ 00:21:59.673 17:46:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:21:59.673 17:46:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:59.673 17:46:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:59.673 17:46:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:21:59.673 ************************************ 00:21:59.673 START TEST nvmf_target_disconnect_tc2 00:21:59.673 ************************************ 00:21:59.673 17:46:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:21:59.673 17:46:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:21:59.673 17:46:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:21:59.673 17:46:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:59.673 17:46:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:59.673 17:46:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:59.673 17:46:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=722160 00:21:59.673 17:46:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 722160 00:21:59.673 17:46:37 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:21:59.673 17:46:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 722160 ']' 00:21:59.674 17:46:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.674 17:46:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:59.674 17:46:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.674 17:46:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:59.674 17:46:37 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:59.674 [2024-10-17 17:46:38.041963] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:21:59.674 [2024-10-17 17:46:38.042017] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.931 [2024-10-17 17:46:38.127363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:59.931 [2024-10-17 17:46:38.172272] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:59.931 [2024-10-17 17:46:38.172318] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:59.931 [2024-10-17 17:46:38.172328] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:59.931 [2024-10-17 17:46:38.172337] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:59.931 [2024-10-17 17:46:38.172344] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
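[Note] `nvmf_tgt ... -m 0xF0` above pins the target to core mask 0xF0, which is why the app reports "Total cores available: 4" and four reactors start on cores 4-7 on the next line (the earlier bdevperf-side target used `-c 0xE`, i.e. cores 1-3). Expanding any such mask is one line of shell arithmetic:

    mask=0xF0
    for core in $(seq 0 63); do (( (mask >> core) & 1 )) && echo "core $core"; done
    # 0xF0 = 0b11110000 -> cores 4 5 6 7; 0xE = 0b1110 -> cores 1 2 3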
00:21:59.931 [2024-10-17 17:46:38.173844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:21:59.931 [2024-10-17 17:46:38.173878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:21:59.931 [2024-10-17 17:46:38.173977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:59.931 [2024-10-17 17:46:38.173978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:22:00.864 17:46:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:00.864 17:46:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:00.864 17:46:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:00.865 17:46:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:00.865 17:46:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:00.865 17:46:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.865 17:46:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:00.865 17:46:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.865 17:46:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:00.865 Malloc0 00:22:00.865 17:46:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.865 17:46:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:22:00.865 17:46:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.865 17:46:38 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:00.865 [2024-10-17 17:46:39.005506] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x67d650/0x689070) succeed. 00:22:00.865 [2024-10-17 17:46:39.016283] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x67ece0/0x6ca710) succeed. 
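[Note] rpc_cmd in this trace is the suite's wrapper that forwards to scripts/rpc.py on /var/tmp/spdk.sock. Replayed by hand, the tc2 target bring-up traced in this block, together with the subsystem/namespace/listener calls that follow on the next line, would look like this (every RPC name and argument is copied from the log; only the checkout path variable is taken from this job's workspace):

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    $SPDK/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420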
00:22:00.865 17:46:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.865 17:46:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:00.865 17:46:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.865 17:46:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:00.865 17:46:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.865 17:46:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:00.865 17:46:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.865 17:46:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:00.865 17:46:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.865 17:46:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:00.865 17:46:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.865 17:46:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:00.865 [2024-10-17 17:46:39.166889] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:00.865 17:46:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.865 17:46:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:22:00.865 17:46:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:00.865 17:46:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:00.865 17:46:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:00.865 17:46:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=722364 00:22:00.865 17:46:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:22:00.865 17:46:39 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:22:03.391 17:46:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 
722160 00:22:03.391 17:46:41 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:22:04.325 Read completed with error (sct=0, sc=8) 00:22:04.325 starting I/O failed 00:22:04.325 Write completed with error (sct=0, sc=8) 00:22:04.325 starting I/O failed 00:22:04.325 Read completed with error (sct=0, sc=8) 00:22:04.325 starting I/O failed 00:22:04.325 Write completed with error (sct=0, sc=8) 00:22:04.325 starting I/O failed 00:22:04.325 Write completed with error (sct=0, sc=8) 00:22:04.325 starting I/O failed 00:22:04.325 Read completed with error (sct=0, sc=8) 00:22:04.325 starting I/O failed 00:22:04.325 Read completed with error (sct=0, sc=8) 00:22:04.325 starting I/O failed 00:22:04.325 Write completed with error (sct=0, sc=8) 00:22:04.325 starting I/O failed 00:22:04.325 Write completed with error (sct=0, sc=8) 00:22:04.325 starting I/O failed 00:22:04.325 Read completed with error (sct=0, sc=8) 00:22:04.325 starting I/O failed 00:22:04.325 Write completed with error (sct=0, sc=8) 00:22:04.325 starting I/O failed 00:22:04.325 Read completed with error (sct=0, sc=8) 00:22:04.325 starting I/O failed 00:22:04.325 Write completed with error (sct=0, sc=8) 00:22:04.325 starting I/O failed 00:22:04.325 Read completed with error (sct=0, sc=8) 00:22:04.325 starting I/O failed 00:22:04.325 Write completed with error (sct=0, sc=8) 00:22:04.325 starting I/O failed 00:22:04.325 Write completed with error (sct=0, sc=8) 00:22:04.325 starting I/O failed 00:22:04.325 Read completed with error (sct=0, sc=8) 00:22:04.325 starting I/O failed 00:22:04.325 Read completed with error (sct=0, sc=8) 00:22:04.325 starting I/O failed 00:22:04.325 Read completed with error (sct=0, sc=8) 00:22:04.325 starting I/O failed 00:22:04.325 Read completed with error (sct=0, sc=8) 00:22:04.325 starting I/O failed 00:22:04.325 Write completed with error (sct=0, sc=8) 00:22:04.325 starting I/O failed 00:22:04.325 Write completed with error (sct=0, sc=8) 00:22:04.325 starting I/O failed 00:22:04.325 Read completed with error (sct=0, sc=8) 00:22:04.325 starting I/O failed 00:22:04.325 Read completed with error (sct=0, sc=8) 00:22:04.325 starting I/O failed 00:22:04.325 Read completed with error (sct=0, sc=8) 00:22:04.325 starting I/O failed 00:22:04.325 Write completed with error (sct=0, sc=8) 00:22:04.325 starting I/O failed 00:22:04.325 Read completed with error (sct=0, sc=8) 00:22:04.325 starting I/O failed 00:22:04.325 Read completed with error (sct=0, sc=8) 00:22:04.325 starting I/O failed 00:22:04.325 Read completed with error (sct=0, sc=8) 00:22:04.325 starting I/O failed 00:22:04.325 Read completed with error (sct=0, sc=8) 00:22:04.325 starting I/O failed 00:22:04.325 Write completed with error (sct=0, sc=8) 00:22:04.325 starting I/O failed 00:22:04.325 Write completed with error (sct=0, sc=8) 00:22:04.325 starting I/O failed 00:22:04.325 [2024-10-17 17:46:42.365933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:05.009 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 722160 Killed "${NVMF_APP[@]}" "$@" 00:22:05.009 17:46:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:22:05.009 17:46:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:22:05.009 17:46:43 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:05.009 17:46:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:05.009 17:46:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:05.009 17:46:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # nvmfpid=722915 00:22:05.009 17:46:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # waitforlisten 722915 00:22:05.009 17:46:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:22:05.009 17:46:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 722915 ']' 00:22:05.009 17:46:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.009 17:46:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:05.009 17:46:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:05.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.009 17:46:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:05.009 17:46:43 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:05.009 [2024-10-17 17:46:43.249660] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
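At this point the harness's nvmfappstart helper relaunches the target it hard-killed a moment ago; the full nvmf_tgt command line and the waitforlisten call are both in the trace above. Roughly what that amounts to, as a standalone sketch (helper names exactly as they appear in the trace; waitforlisten blocks until the RPC socket answers):

  # sketch of the restart performed via nvmfappstart -m 0xF0:
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!                 # 722915 in this run
  waitforlisten "$nvmfpid"   # returns once /var/tmp/spdk.sock accepts RPCs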
00:22:05.009 [2024-10-17 17:46:43.249720] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.009 [2024-10-17 17:46:43.336147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:05.009 Write completed with error (sct=0, sc=8) 00:22:05.009 starting I/O failed 00:22:05.009 Write completed with error (sct=0, sc=8) 00:22:05.009 starting I/O failed 00:22:05.009 Read completed with error (sct=0, sc=8) 00:22:05.009 starting I/O failed 00:22:05.009 Read completed with error (sct=0, sc=8) 00:22:05.009 starting I/O failed 00:22:05.009 Read completed with error (sct=0, sc=8) 00:22:05.009 starting I/O failed 00:22:05.009 Write completed with error (sct=0, sc=8) 00:22:05.009 starting I/O failed 00:22:05.009 Read completed with error (sct=0, sc=8) 00:22:05.009 starting I/O failed 00:22:05.009 Write completed with error (sct=0, sc=8) 00:22:05.009 starting I/O failed 00:22:05.009 Write completed with error (sct=0, sc=8) 00:22:05.009 starting I/O failed 00:22:05.009 Write completed with error (sct=0, sc=8) 00:22:05.009 starting I/O failed 00:22:05.009 Write completed with error (sct=0, sc=8) 00:22:05.009 starting I/O failed 00:22:05.009 Read completed with error (sct=0, sc=8) 00:22:05.009 starting I/O failed 00:22:05.009 Write completed with error (sct=0, sc=8) 00:22:05.009 starting I/O failed 00:22:05.009 Write completed with error (sct=0, sc=8) 00:22:05.009 starting I/O failed 00:22:05.009 Write completed with error (sct=0, sc=8) 00:22:05.009 starting I/O failed 00:22:05.009 Read completed with error (sct=0, sc=8) 00:22:05.009 starting I/O failed 00:22:05.009 Write completed with error (sct=0, sc=8) 00:22:05.009 starting I/O failed 00:22:05.009 Read completed with error (sct=0, sc=8) 00:22:05.009 starting I/O failed 00:22:05.009 Write completed with error (sct=0, sc=8) 00:22:05.009 starting I/O failed 00:22:05.009 Write completed with error (sct=0, sc=8) 00:22:05.009 starting I/O failed 00:22:05.009 Read completed with error (sct=0, sc=8) 00:22:05.009 starting I/O failed 00:22:05.009 Write completed with error (sct=0, sc=8) 00:22:05.009 starting I/O failed 00:22:05.009 Read completed with error (sct=0, sc=8) 00:22:05.009 starting I/O failed 00:22:05.009 Read completed with error (sct=0, sc=8) 00:22:05.009 starting I/O failed 00:22:05.009 Write completed with error (sct=0, sc=8) 00:22:05.009 starting I/O failed 00:22:05.009 Read completed with error (sct=0, sc=8) 00:22:05.009 starting I/O failed 00:22:05.009 Write completed with error (sct=0, sc=8) 00:22:05.009 starting I/O failed 00:22:05.009 Write completed with error (sct=0, sc=8) 00:22:05.009 starting I/O failed 00:22:05.009 Write completed with error (sct=0, sc=8) 00:22:05.009 starting I/O failed 00:22:05.009 Read completed with error (sct=0, sc=8) 00:22:05.009 starting I/O failed 00:22:05.009 Read completed with error (sct=0, sc=8) 00:22:05.009 starting I/O failed 00:22:05.009 Read completed with error (sct=0, sc=8) 00:22:05.009 starting I/O failed 00:22:05.009 [2024-10-17 17:46:43.370410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:05.009 [2024-10-17 17:46:43.381918] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:05.009 [2024-10-17 17:46:43.381952] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.009 [2024-10-17 17:46:43.381962] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.010 [2024-10-17 17:46:43.381971] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.010 [2024-10-17 17:46:43.381978] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:05.010 [2024-10-17 17:46:43.383458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:05.010 [2024-10-17 17:46:43.383589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:05.010 [2024-10-17 17:46:43.383490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:05.010 [2024-10-17 17:46:43.383591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:22:05.943 17:46:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:05.943 17:46:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:22:05.943 17:46:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:05.943 17:46:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:05.943 17:46:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:05.943 17:46:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.943 17:46:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:05.943 17:46:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.943 17:46:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:05.943 Malloc0 00:22:05.943 17:46:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:05.943 17:46:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:22:05.943 17:46:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:05.943 17:46:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:05.943 [2024-10-17 17:46:44.204235] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc8a650/0xc96070) succeed. 00:22:05.943 [2024-10-17 17:46:44.214994] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc8bce0/0xcd7710) succeed. 
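The restarted target is then configured identically to the first instance (the RPCs repeat just below): a 64 MB Malloc0 bdev with 512-byte blocks, the RDMA transport, subsystem cnode1 with its namespace and data listener, plus the discovery listener. rpc_cmd in this trace is the test suite's thin wrapper around scripts/rpc.py, so the sequence is equivalent to this standalone sketch:

  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420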
00:22:06.202 17:46:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.202 17:46:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:06.202 17:46:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.202 17:46:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:06.202 17:46:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.202 17:46:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:06.202 17:46:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.202 17:46:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:06.202 17:46:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.202 17:46:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:06.202 17:46:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.202 17:46:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:06.202 [2024-10-17 17:46:44.364594] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:06.202 17:46:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.202 17:46:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:22:06.202 17:46:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:06.202 17:46:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:06.202 Write completed with error (sct=0, sc=8) 00:22:06.202 starting I/O failed 00:22:06.202 Read completed with error (sct=0, sc=8) 00:22:06.202 starting I/O failed 00:22:06.202 Read completed with error (sct=0, sc=8) 00:22:06.202 starting I/O failed 00:22:06.202 Read completed with error (sct=0, sc=8) 00:22:06.202 starting I/O failed 00:22:06.202 Write completed with error (sct=0, sc=8) 00:22:06.202 starting I/O failed 00:22:06.202 Read completed with error (sct=0, sc=8) 00:22:06.202 starting I/O failed 00:22:06.202 Write completed with error (sct=0, sc=8) 00:22:06.202 starting I/O failed 00:22:06.202 Write completed with error (sct=0, sc=8) 00:22:06.202 starting I/O failed 00:22:06.202 Write completed with error (sct=0, sc=8) 00:22:06.202 starting I/O failed 00:22:06.202 Read completed with error (sct=0, sc=8) 00:22:06.202 starting I/O failed 
00:22:06.202 Write completed with error (sct=0, sc=8) 00:22:06.202 starting I/O failed 00:22:06.202 Write completed with error (sct=0, sc=8) 00:22:06.202 starting I/O failed 00:22:06.202 Write completed with error (sct=0, sc=8) 00:22:06.202 starting I/O failed 00:22:06.202 Write completed with error (sct=0, sc=8) 00:22:06.202 starting I/O failed 00:22:06.202 Write completed with error (sct=0, sc=8) 00:22:06.202 starting I/O failed 00:22:06.202 Read completed with error (sct=0, sc=8) 00:22:06.202 starting I/O failed 00:22:06.202 Read completed with error (sct=0, sc=8) 00:22:06.202 starting I/O failed 00:22:06.202 Read completed with error (sct=0, sc=8) 00:22:06.202 starting I/O failed 00:22:06.202 Read completed with error (sct=0, sc=8) 00:22:06.202 starting I/O failed 00:22:06.202 Write completed with error (sct=0, sc=8) 00:22:06.202 starting I/O failed 00:22:06.202 Write completed with error (sct=0, sc=8) 00:22:06.202 starting I/O failed 00:22:06.202 Read completed with error (sct=0, sc=8) 00:22:06.202 starting I/O failed 00:22:06.202 Read completed with error (sct=0, sc=8) 00:22:06.202 starting I/O failed 00:22:06.202 Read completed with error (sct=0, sc=8) 00:22:06.202 starting I/O failed 00:22:06.202 Write completed with error (sct=0, sc=8) 00:22:06.202 starting I/O failed 00:22:06.202 Read completed with error (sct=0, sc=8) 00:22:06.202 starting I/O failed 00:22:06.202 Read completed with error (sct=0, sc=8) 00:22:06.202 starting I/O failed 00:22:06.202 Read completed with error (sct=0, sc=8) 00:22:06.202 starting I/O failed 00:22:06.202 Read completed with error (sct=0, sc=8) 00:22:06.202 starting I/O failed 00:22:06.202 Read completed with error (sct=0, sc=8) 00:22:06.202 starting I/O failed 00:22:06.202 Read completed with error (sct=0, sc=8) 00:22:06.202 starting I/O failed 00:22:06.202 Write completed with error (sct=0, sc=8) 00:22:06.202 starting I/O failed 00:22:06.202 [2024-10-17 17:46:44.374963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.202 17:46:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:06.202 17:46:44 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 722364 00:22:06.202 [2024-10-17 17:46:44.378258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.202 [2024-10-17 17:46:44.378305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.202 [2024-10-17 17:46:44.378326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.202 [2024-10-17 17:46:44.378336] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.202 [2024-10-17 17:46:44.378346] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.202 [2024-10-17 17:46:44.388274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.202 qpair failed and we were unable to recover it. 
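From here the log settles into one repeating failure signature per reconnect attempt. On the target side, ctrlr.c rejects the I/O qpair's Fabrics CONNECT because the restarted process has no controller with the old ID ("Unknown controller ID 0x1") and completes it with sct 1, sc 130; 130 is 0x82, which per the NVMe-oF spec is CONNECT Invalid Parameters. On the host side, nvme_fabric and nvme_rdma report the failed CONNECT poll, spdk_nvme_qpair_process_completions surfaces it as CQ transport error -6 (ENXIO), and the qpair is declared unrecoverable. The initiator doing these retries is the reconnect example started earlier, with this invocation (copied from the trace, reproducible by hand):

  build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'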
00:22:06.202 [2024-10-17 17:46:44.398254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.202 [2024-10-17 17:46:44.398293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.202 [2024-10-17 17:46:44.398312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.202 [2024-10-17 17:46:44.398322] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.202 [2024-10-17 17:46:44.398331] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.202 [2024-10-17 17:46:44.408223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.202 qpair failed and we were unable to recover it. 00:22:06.202 [2024-10-17 17:46:44.418185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.202 [2024-10-17 17:46:44.418233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.202 [2024-10-17 17:46:44.418252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.202 [2024-10-17 17:46:44.418262] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.202 [2024-10-17 17:46:44.418271] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.202 [2024-10-17 17:46:44.428366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.202 qpair failed and we were unable to recover it. 00:22:06.202 [2024-10-17 17:46:44.438253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.202 [2024-10-17 17:46:44.438299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.202 [2024-10-17 17:46:44.438321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.202 [2024-10-17 17:46:44.438330] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.202 [2024-10-17 17:46:44.438339] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.202 [2024-10-17 17:46:44.448440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.202 qpair failed and we were unable to recover it. 
00:22:06.202 [2024-10-17 17:46:44.458279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.202 [2024-10-17 17:46:44.458322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.202 [2024-10-17 17:46:44.458340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.202 [2024-10-17 17:46:44.458350] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.202 [2024-10-17 17:46:44.458359] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.202 [2024-10-17 17:46:44.468452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.202 qpair failed and we were unable to recover it. 00:22:06.202 [2024-10-17 17:46:44.478369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.202 [2024-10-17 17:46:44.478414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.202 [2024-10-17 17:46:44.478438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.202 [2024-10-17 17:46:44.478448] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.203 [2024-10-17 17:46:44.478457] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.203 [2024-10-17 17:46:44.488483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.203 qpair failed and we were unable to recover it. 00:22:06.203 [2024-10-17 17:46:44.498427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.203 [2024-10-17 17:46:44.498469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.203 [2024-10-17 17:46:44.498487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.203 [2024-10-17 17:46:44.498497] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.203 [2024-10-17 17:46:44.498506] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.203 [2024-10-17 17:46:44.508586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.203 qpair failed and we were unable to recover it. 
00:22:06.203 [2024-10-17 17:46:44.518521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.203 [2024-10-17 17:46:44.518564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.203 [2024-10-17 17:46:44.518583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.203 [2024-10-17 17:46:44.518593] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.203 [2024-10-17 17:46:44.518602] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.203 [2024-10-17 17:46:44.528640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.203 qpair failed and we were unable to recover it. 00:22:06.203 [2024-10-17 17:46:44.538501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.203 [2024-10-17 17:46:44.538544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.203 [2024-10-17 17:46:44.538563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.203 [2024-10-17 17:46:44.538573] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.203 [2024-10-17 17:46:44.538582] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.203 [2024-10-17 17:46:44.548826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.203 qpair failed and we were unable to recover it. 00:22:06.203 [2024-10-17 17:46:44.558704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.203 [2024-10-17 17:46:44.558747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.203 [2024-10-17 17:46:44.558766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.203 [2024-10-17 17:46:44.558776] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.203 [2024-10-17 17:46:44.558785] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.203 [2024-10-17 17:46:44.568724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.203 qpair failed and we were unable to recover it. 
00:22:06.203 [2024-10-17 17:46:44.578629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.203 [2024-10-17 17:46:44.578672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.203 [2024-10-17 17:46:44.578690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.203 [2024-10-17 17:46:44.578699] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.203 [2024-10-17 17:46:44.578709] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.203 [2024-10-17 17:46:44.588823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.203 qpair failed and we were unable to recover it. 00:22:06.461 [2024-10-17 17:46:44.598721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.461 [2024-10-17 17:46:44.598768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.461 [2024-10-17 17:46:44.598786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.461 [2024-10-17 17:46:44.598796] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.461 [2024-10-17 17:46:44.598804] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.461 [2024-10-17 17:46:44.608824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.461 qpair failed and we were unable to recover it. 00:22:06.461 [2024-10-17 17:46:44.618864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.461 [2024-10-17 17:46:44.618907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.461 [2024-10-17 17:46:44.618926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.461 [2024-10-17 17:46:44.618935] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.461 [2024-10-17 17:46:44.618944] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.461 [2024-10-17 17:46:44.628913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.461 qpair failed and we were unable to recover it. 
00:22:06.461 [2024-10-17 17:46:44.638824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.462 [2024-10-17 17:46:44.638868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.462 [2024-10-17 17:46:44.638886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.462 [2024-10-17 17:46:44.638895] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.462 [2024-10-17 17:46:44.638904] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.462 [2024-10-17 17:46:44.649087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.462 qpair failed and we were unable to recover it. 00:22:06.462 [2024-10-17 17:46:44.658866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.462 [2024-10-17 17:46:44.658907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.462 [2024-10-17 17:46:44.658925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.462 [2024-10-17 17:46:44.658935] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.462 [2024-10-17 17:46:44.658943] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.462 [2024-10-17 17:46:44.669128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.462 qpair failed and we were unable to recover it. 00:22:06.462 [2024-10-17 17:46:44.679029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.462 [2024-10-17 17:46:44.679071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.462 [2024-10-17 17:46:44.679089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.462 [2024-10-17 17:46:44.679099] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.462 [2024-10-17 17:46:44.679108] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.462 [2024-10-17 17:46:44.689188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.462 qpair failed and we were unable to recover it. 
00:22:06.462 [2024-10-17 17:46:44.699149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.462 [2024-10-17 17:46:44.699190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.462 [2024-10-17 17:46:44.699208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.462 [2024-10-17 17:46:44.699221] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.462 [2024-10-17 17:46:44.699230] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.462 [2024-10-17 17:46:44.709139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.462 qpair failed and we were unable to recover it. 00:22:06.462 [2024-10-17 17:46:44.719124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.462 [2024-10-17 17:46:44.719165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.462 [2024-10-17 17:46:44.719184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.462 [2024-10-17 17:46:44.719193] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.462 [2024-10-17 17:46:44.719202] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.462 [2024-10-17 17:46:44.729226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.462 qpair failed and we were unable to recover it. 00:22:06.462 [2024-10-17 17:46:44.739049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.462 [2024-10-17 17:46:44.739090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.462 [2024-10-17 17:46:44.739109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.462 [2024-10-17 17:46:44.739118] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.462 [2024-10-17 17:46:44.739127] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.462 [2024-10-17 17:46:44.749184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.462 qpair failed and we were unable to recover it. 
00:22:06.462 [2024-10-17 17:46:44.759161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.462 [2024-10-17 17:46:44.759203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.462 [2024-10-17 17:46:44.759221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.462 [2024-10-17 17:46:44.759231] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.462 [2024-10-17 17:46:44.759240] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.462 [2024-10-17 17:46:44.769387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.462 qpair failed and we were unable to recover it. 00:22:06.462 [2024-10-17 17:46:44.779150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.462 [2024-10-17 17:46:44.779192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.462 [2024-10-17 17:46:44.779210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.462 [2024-10-17 17:46:44.779220] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.462 [2024-10-17 17:46:44.779229] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.462 [2024-10-17 17:46:44.789261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.462 qpair failed and we were unable to recover it. 00:22:06.462 [2024-10-17 17:46:44.799274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.462 [2024-10-17 17:46:44.799314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.462 [2024-10-17 17:46:44.799332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.462 [2024-10-17 17:46:44.799342] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.462 [2024-10-17 17:46:44.799351] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.462 [2024-10-17 17:46:44.809592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.462 qpair failed and we were unable to recover it. 
00:22:06.462 [2024-10-17 17:46:44.819389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.462 [2024-10-17 17:46:44.819440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.462 [2024-10-17 17:46:44.819459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.462 [2024-10-17 17:46:44.819468] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.462 [2024-10-17 17:46:44.819477] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.462 [2024-10-17 17:46:44.829569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.462 qpair failed and we were unable to recover it. 00:22:06.462 [2024-10-17 17:46:44.839489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.462 [2024-10-17 17:46:44.839531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.462 [2024-10-17 17:46:44.839550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.462 [2024-10-17 17:46:44.839559] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.462 [2024-10-17 17:46:44.839568] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.462 [2024-10-17 17:46:44.849652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.462 qpair failed and we were unable to recover it. 00:22:06.720 [2024-10-17 17:46:44.859520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.721 [2024-10-17 17:46:44.859569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.721 [2024-10-17 17:46:44.859588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.721 [2024-10-17 17:46:44.859597] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.721 [2024-10-17 17:46:44.859606] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.721 [2024-10-17 17:46:44.869582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.721 qpair failed and we were unable to recover it. 
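The attempts above differ only in their timestamps: every CONNECT targets the same rqpair (0x200000330940) and fails the same way, and the pattern continues for the rest of this excerpt. When triaging a log like this it is quicker to count the iterations than to read them; a sketch, with the saved log's file name purely illustrative:

  # count the failed qpair connects in a saved console log (filename assumed):
  grep -c 'Failed to connect rqpair=0x200000330940' nvmf-phy-autotest.log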
00:22:06.721 [2024-10-17 17:46:44.879504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.721 [2024-10-17 17:46:44.879546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.721 [2024-10-17 17:46:44.879567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.721 [2024-10-17 17:46:44.879577] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.721 [2024-10-17 17:46:44.879585] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.721 [2024-10-17 17:46:44.889658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.721 qpair failed and we were unable to recover it. 00:22:06.721 [2024-10-17 17:46:44.899484] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.721 [2024-10-17 17:46:44.899529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.721 [2024-10-17 17:46:44.899548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.721 [2024-10-17 17:46:44.899557] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.721 [2024-10-17 17:46:44.899566] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.721 [2024-10-17 17:46:44.909873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.721 qpair failed and we were unable to recover it. 00:22:06.721 [2024-10-17 17:46:44.919590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.721 [2024-10-17 17:46:44.919640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.721 [2024-10-17 17:46:44.919659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.721 [2024-10-17 17:46:44.919669] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.721 [2024-10-17 17:46:44.919678] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.721 [2024-10-17 17:46:44.929729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.721 qpair failed and we were unable to recover it. 
00:22:06.721 [2024-10-17 17:46:44.939708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.721 [2024-10-17 17:46:44.939750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.721 [2024-10-17 17:46:44.939769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.721 [2024-10-17 17:46:44.939779] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.721 [2024-10-17 17:46:44.939787] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.721 [2024-10-17 17:46:44.949831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.721 qpair failed and we were unable to recover it. 00:22:06.721 [2024-10-17 17:46:44.959731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.721 [2024-10-17 17:46:44.959769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.721 [2024-10-17 17:46:44.959787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.721 [2024-10-17 17:46:44.959796] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.721 [2024-10-17 17:46:44.959805] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.721 [2024-10-17 17:46:44.969890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.721 qpair failed and we were unable to recover it. 00:22:06.721 [2024-10-17 17:46:44.979758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.721 [2024-10-17 17:46:44.979804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.721 [2024-10-17 17:46:44.979823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.721 [2024-10-17 17:46:44.979833] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.721 [2024-10-17 17:46:44.979841] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.721 [2024-10-17 17:46:44.989942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.721 qpair failed and we were unable to recover it. 
00:22:06.721 [2024-10-17 17:46:44.999817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.721 [2024-10-17 17:46:44.999858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.721 [2024-10-17 17:46:44.999877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.721 [2024-10-17 17:46:44.999886] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.721 [2024-10-17 17:46:44.999895] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.721 [2024-10-17 17:46:45.010150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.721 qpair failed and we were unable to recover it. 00:22:06.721 [2024-10-17 17:46:45.019889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.721 [2024-10-17 17:46:45.019930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.721 [2024-10-17 17:46:45.019949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.721 [2024-10-17 17:46:45.019958] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.721 [2024-10-17 17:46:45.019967] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.721 [2024-10-17 17:46:45.030079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.721 qpair failed and we were unable to recover it. 00:22:06.721 [2024-10-17 17:46:45.039999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.721 [2024-10-17 17:46:45.040042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.721 [2024-10-17 17:46:45.040060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.721 [2024-10-17 17:46:45.040076] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.721 [2024-10-17 17:46:45.040085] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.721 [2024-10-17 17:46:45.050106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.721 qpair failed and we were unable to recover it. 
00:22:06.721 [2024-10-17 17:46:45.059991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.721 [2024-10-17 17:46:45.060036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.721 [2024-10-17 17:46:45.060054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.721 [2024-10-17 17:46:45.060063] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.721 [2024-10-17 17:46:45.060072] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.721 [2024-10-17 17:46:45.070319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.721 qpair failed and we were unable to recover it. 00:22:06.721 [2024-10-17 17:46:45.080039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.721 [2024-10-17 17:46:45.080082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.721 [2024-10-17 17:46:45.080100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.721 [2024-10-17 17:46:45.080110] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.721 [2024-10-17 17:46:45.080118] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.721 [2024-10-17 17:46:45.090163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.721 qpair failed and we were unable to recover it. 00:22:06.721 [2024-10-17 17:46:45.100104] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.721 [2024-10-17 17:46:45.100150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.721 [2024-10-17 17:46:45.100168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.721 [2024-10-17 17:46:45.100178] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.721 [2024-10-17 17:46:45.100186] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.721 [2024-10-17 17:46:45.110276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.721 qpair failed and we were unable to recover it. 
00:22:06.980 [2024-10-17 17:46:45.120212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.980 [2024-10-17 17:46:45.120259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.980 [2024-10-17 17:46:45.120278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.980 [2024-10-17 17:46:45.120287] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.980 [2024-10-17 17:46:45.120296] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.980 [2024-10-17 17:46:45.130348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.980 qpair failed and we were unable to recover it. 00:22:06.980 [2024-10-17 17:46:45.140158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.980 [2024-10-17 17:46:45.140194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.980 [2024-10-17 17:46:45.140212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.980 [2024-10-17 17:46:45.140222] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.980 [2024-10-17 17:46:45.140234] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.980 [2024-10-17 17:46:45.150468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.980 qpair failed and we were unable to recover it. 00:22:06.980 [2024-10-17 17:46:45.160322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:06.980 [2024-10-17 17:46:45.160363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:06.980 [2024-10-17 17:46:45.160381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:06.980 [2024-10-17 17:46:45.160391] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:06.980 [2024-10-17 17:46:45.160400] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:06.980 [2024-10-17 17:46:45.170547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.980 qpair failed and we were unable to recover it. 
00:22:06.980 [2024-10-17 17:46:45.180267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:06.980 [2024-10-17 17:46:45.180308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:06.980 [2024-10-17 17:46:45.180326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:06.980 [2024-10-17 17:46:45.180336] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:06.980 [2024-10-17 17:46:45.180345] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:06.980 [2024-10-17 17:46:45.190496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:06.980 qpair failed and we were unable to recover it.
00:22:06.980 [2024-10-17 17:46:45.200441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:06.980 [2024-10-17 17:46:45.200482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:06.980 [2024-10-17 17:46:45.200501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:06.980 [2024-10-17 17:46:45.200510] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:06.980 [2024-10-17 17:46:45.200519] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:06.980 [2024-10-17 17:46:45.210597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:06.980 qpair failed and we were unable to recover it.
00:22:06.980 [2024-10-17 17:46:45.220506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:06.980 [2024-10-17 17:46:45.220544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:06.980 [2024-10-17 17:46:45.220563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:06.980 [2024-10-17 17:46:45.220573] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:06.980 [2024-10-17 17:46:45.220581] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:06.980 [2024-10-17 17:46:45.230600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:06.980 qpair failed and we were unable to recover it.
00:22:06.980 [2024-10-17 17:46:45.240527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:06.980 [2024-10-17 17:46:45.240567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:06.980 [2024-10-17 17:46:45.240586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:06.980 [2024-10-17 17:46:45.240595] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:06.980 [2024-10-17 17:46:45.240604] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:06.980 [2024-10-17 17:46:45.250716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:06.980 qpair failed and we were unable to recover it.
00:22:06.980 [2024-10-17 17:46:45.260498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:06.980 [2024-10-17 17:46:45.260538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:06.980 [2024-10-17 17:46:45.260555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:06.980 [2024-10-17 17:46:45.260565] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:06.981 [2024-10-17 17:46:45.260573] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:06.981 [2024-10-17 17:46:45.270581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:06.981 qpair failed and we were unable to recover it.
00:22:06.981 [2024-10-17 17:46:45.280665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:06.981 [2024-10-17 17:46:45.280705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:06.981 [2024-10-17 17:46:45.280724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:06.981 [2024-10-17 17:46:45.280734] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:06.981 [2024-10-17 17:46:45.280743] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:06.981 [2024-10-17 17:46:45.291038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:06.981 qpair failed and we were unable to recover it.
00:22:06.981 [2024-10-17 17:46:45.300605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:06.981 [2024-10-17 17:46:45.300641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:06.981 [2024-10-17 17:46:45.300659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:06.981 [2024-10-17 17:46:45.300669] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:06.981 [2024-10-17 17:46:45.300678] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:06.981 [2024-10-17 17:46:45.310862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:06.981 qpair failed and we were unable to recover it.
00:22:06.981 [2024-10-17 17:46:45.320778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:06.981 [2024-10-17 17:46:45.320820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:06.981 [2024-10-17 17:46:45.320844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:06.981 [2024-10-17 17:46:45.320854] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:06.981 [2024-10-17 17:46:45.320863] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:06.981 [2024-10-17 17:46:45.330852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:06.981 qpair failed and we were unable to recover it.
00:22:06.981 [2024-10-17 17:46:45.340764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:06.981 [2024-10-17 17:46:45.340804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:06.981 [2024-10-17 17:46:45.340822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:06.981 [2024-10-17 17:46:45.340832] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:06.981 [2024-10-17 17:46:45.340840] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:06.981 [2024-10-17 17:46:45.350929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:06.981 qpair failed and we were unable to recover it.
00:22:06.981 [2024-10-17 17:46:45.360895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:06.981 [2024-10-17 17:46:45.360935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:06.981 [2024-10-17 17:46:45.360953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:06.981 [2024-10-17 17:46:45.360962] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:06.981 [2024-10-17 17:46:45.360972] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.239 [2024-10-17 17:46:45.371079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.239 qpair failed and we were unable to recover it.
00:22:07.239 [2024-10-17 17:46:45.380919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.239 [2024-10-17 17:46:45.380965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.239 [2024-10-17 17:46:45.380983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.239 [2024-10-17 17:46:45.380992] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.239 [2024-10-17 17:46:45.381001] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.239 [2024-10-17 17:46:45.391142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.239 qpair failed and we were unable to recover it.
00:22:07.239 [2024-10-17 17:46:45.401074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.239 [2024-10-17 17:46:45.401117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.239 [2024-10-17 17:46:45.401134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.239 [2024-10-17 17:46:45.401144] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.239 [2024-10-17 17:46:45.401153] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.239 [2024-10-17 17:46:45.410966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.239 qpair failed and we were unable to recover it.
00:22:07.239 [2024-10-17 17:46:45.421018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.239 [2024-10-17 17:46:45.421064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.239 [2024-10-17 17:46:45.421082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.239 [2024-10-17 17:46:45.421092] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.239 [2024-10-17 17:46:45.421101] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.239 [2024-10-17 17:46:45.431284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.239 qpair failed and we were unable to recover it.
00:22:07.239 [2024-10-17 17:46:45.441101] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.239 [2024-10-17 17:46:45.441143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.239 [2024-10-17 17:46:45.441162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.240 [2024-10-17 17:46:45.441171] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.240 [2024-10-17 17:46:45.441180] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.240 [2024-10-17 17:46:45.451139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.240 qpair failed and we were unable to recover it.
00:22:07.240 [2024-10-17 17:46:45.461054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.240 [2024-10-17 17:46:45.461092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.240 [2024-10-17 17:46:45.461110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.240 [2024-10-17 17:46:45.461120] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.240 [2024-10-17 17:46:45.461129] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.240 [2024-10-17 17:46:45.471297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.240 qpair failed and we were unable to recover it.
00:22:07.240 [2024-10-17 17:46:45.481246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.240 [2024-10-17 17:46:45.481289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.240 [2024-10-17 17:46:45.481307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.240 [2024-10-17 17:46:45.481317] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.240 [2024-10-17 17:46:45.481326] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.240 [2024-10-17 17:46:45.491366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.240 qpair failed and we were unable to recover it.
00:22:07.240 [2024-10-17 17:46:45.501328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.240 [2024-10-17 17:46:45.501370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.240 [2024-10-17 17:46:45.501391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.240 [2024-10-17 17:46:45.501401] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.240 [2024-10-17 17:46:45.501409] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.240 [2024-10-17 17:46:45.511544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.240 qpair failed and we were unable to recover it.
00:22:07.240 [2024-10-17 17:46:45.521413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.240 [2024-10-17 17:46:45.521455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.240 [2024-10-17 17:46:45.521473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.240 [2024-10-17 17:46:45.521483] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.240 [2024-10-17 17:46:45.521492] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.240 [2024-10-17 17:46:45.531600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.240 qpair failed and we were unable to recover it.
00:22:07.240 [2024-10-17 17:46:45.541460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.240 [2024-10-17 17:46:45.541502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.240 [2024-10-17 17:46:45.541520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.240 [2024-10-17 17:46:45.541530] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.240 [2024-10-17 17:46:45.541539] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.240 [2024-10-17 17:46:45.551738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.240 qpair failed and we were unable to recover it.
00:22:07.240 [2024-10-17 17:46:45.561436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.240 [2024-10-17 17:46:45.561478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.240 [2024-10-17 17:46:45.561496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.240 [2024-10-17 17:46:45.561506] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.240 [2024-10-17 17:46:45.561515] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.240 [2024-10-17 17:46:45.571652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.240 qpair failed and we were unable to recover it.
00:22:07.240 [2024-10-17 17:46:45.581517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.240 [2024-10-17 17:46:45.581562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.240 [2024-10-17 17:46:45.581580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.240 [2024-10-17 17:46:45.581590] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.240 [2024-10-17 17:46:45.581602] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.240 [2024-10-17 17:46:45.591749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.240 qpair failed and we were unable to recover it.
00:22:07.240 [2024-10-17 17:46:45.601643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.240 [2024-10-17 17:46:45.601686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.240 [2024-10-17 17:46:45.601705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.240 [2024-10-17 17:46:45.601716] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.240 [2024-10-17 17:46:45.601726] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.240 [2024-10-17 17:46:45.611733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.240 qpair failed and we were unable to recover it.
00:22:07.240 [2024-10-17 17:46:45.621645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.240 [2024-10-17 17:46:45.621686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.240 [2024-10-17 17:46:45.621705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.240 [2024-10-17 17:46:45.621715] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.240 [2024-10-17 17:46:45.621723] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.498 [2024-10-17 17:46:45.631694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.498 qpair failed and we were unable to recover it.
00:22:07.498 [2024-10-17 17:46:45.641610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.498 [2024-10-17 17:46:45.641657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.498 [2024-10-17 17:46:45.641675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.498 [2024-10-17 17:46:45.641684] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.498 [2024-10-17 17:46:45.641694] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.498 [2024-10-17 17:46:45.651749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.498 qpair failed and we were unable to recover it.
00:22:07.498 [2024-10-17 17:46:45.661774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.498 [2024-10-17 17:46:45.661815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.498 [2024-10-17 17:46:45.661834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.498 [2024-10-17 17:46:45.661844] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.498 [2024-10-17 17:46:45.661853] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.499 [2024-10-17 17:46:45.671945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.499 qpair failed and we were unable to recover it.
00:22:07.499 [2024-10-17 17:46:45.681762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.499 [2024-10-17 17:46:45.681805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.499 [2024-10-17 17:46:45.681824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.499 [2024-10-17 17:46:45.681834] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.499 [2024-10-17 17:46:45.681842] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.499 [2024-10-17 17:46:45.691961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.499 qpair failed and we were unable to recover it.
00:22:07.499 [2024-10-17 17:46:45.701862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.499 [2024-10-17 17:46:45.701899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.499 [2024-10-17 17:46:45.701918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.499 [2024-10-17 17:46:45.701928] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.499 [2024-10-17 17:46:45.701937] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.499 [2024-10-17 17:46:45.712224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.499 qpair failed and we were unable to recover it.
00:22:07.499 [2024-10-17 17:46:45.721939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.499 [2024-10-17 17:46:45.721983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.499 [2024-10-17 17:46:45.722003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.499 [2024-10-17 17:46:45.722013] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.499 [2024-10-17 17:46:45.722022] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.499 [2024-10-17 17:46:45.732057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.499 qpair failed and we were unable to recover it.
00:22:07.499 [2024-10-17 17:46:45.741929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.499 [2024-10-17 17:46:45.741974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.499 [2024-10-17 17:46:45.741992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.499 [2024-10-17 17:46:45.742002] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.499 [2024-10-17 17:46:45.742011] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.499 [2024-10-17 17:46:45.752252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.499 qpair failed and we were unable to recover it.
00:22:07.499 [2024-10-17 17:46:45.761922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.499 [2024-10-17 17:46:45.761962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.499 [2024-10-17 17:46:45.761980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.499 [2024-10-17 17:46:45.761993] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.499 [2024-10-17 17:46:45.762002] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.499 [2024-10-17 17:46:45.772176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.499 qpair failed and we were unable to recover it.
00:22:07.499 [2024-10-17 17:46:45.782162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.499 [2024-10-17 17:46:45.782208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.499 [2024-10-17 17:46:45.782226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.499 [2024-10-17 17:46:45.782236] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.499 [2024-10-17 17:46:45.782245] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.499 [2024-10-17 17:46:45.792220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.499 qpair failed and we were unable to recover it.
00:22:07.499 [2024-10-17 17:46:45.802153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.499 [2024-10-17 17:46:45.802200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.499 [2024-10-17 17:46:45.802218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.499 [2024-10-17 17:46:45.802228] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.499 [2024-10-17 17:46:45.802237] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.499 [2024-10-17 17:46:45.812293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.499 qpair failed and we were unable to recover it.
00:22:07.499 [2024-10-17 17:46:45.822244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.499 [2024-10-17 17:46:45.822284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.499 [2024-10-17 17:46:45.822302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.499 [2024-10-17 17:46:45.822312] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.499 [2024-10-17 17:46:45.822321] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.499 [2024-10-17 17:46:45.832234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.499 qpair failed and we were unable to recover it.
00:22:07.499 [2024-10-17 17:46:45.842217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.499 [2024-10-17 17:46:45.842256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.499 [2024-10-17 17:46:45.842275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.499 [2024-10-17 17:46:45.842285] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.499 [2024-10-17 17:46:45.842294] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.499 [2024-10-17 17:46:45.852457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.499 qpair failed and we were unable to recover it.
00:22:07.499 [2024-10-17 17:46:45.862302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.499 [2024-10-17 17:46:45.862340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.499 [2024-10-17 17:46:45.862359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.499 [2024-10-17 17:46:45.862368] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.499 [2024-10-17 17:46:45.862377] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.499 [2024-10-17 17:46:45.872583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.499 qpair failed and we were unable to recover it.
00:22:07.499 [2024-10-17 17:46:45.882318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.499 [2024-10-17 17:46:45.882362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.499 [2024-10-17 17:46:45.882380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.499 [2024-10-17 17:46:45.882389] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.499 [2024-10-17 17:46:45.882398] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.758 [2024-10-17 17:46:45.892654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.758 qpair failed and we were unable to recover it.
00:22:07.758 [2024-10-17 17:46:45.902471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.758 [2024-10-17 17:46:45.902519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.758 [2024-10-17 17:46:45.902537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.758 [2024-10-17 17:46:45.902547] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.758 [2024-10-17 17:46:45.902555] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.758 [2024-10-17 17:46:45.912555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.758 qpair failed and we were unable to recover it.
00:22:07.758 [2024-10-17 17:46:45.922505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.758 [2024-10-17 17:46:45.922548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.758 [2024-10-17 17:46:45.922566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.758 [2024-10-17 17:46:45.922575] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.758 [2024-10-17 17:46:45.922584] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.758 [2024-10-17 17:46:45.932557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.758 qpair failed and we were unable to recover it.
00:22:07.758 [2024-10-17 17:46:45.942534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.758 [2024-10-17 17:46:45.942575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.758 [2024-10-17 17:46:45.942598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.758 [2024-10-17 17:46:45.942607] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.758 [2024-10-17 17:46:45.942616] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.758 [2024-10-17 17:46:45.952791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.758 qpair failed and we were unable to recover it.
00:22:07.758 [2024-10-17 17:46:45.962609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.758 [2024-10-17 17:46:45.962652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.758 [2024-10-17 17:46:45.962670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.758 [2024-10-17 17:46:45.962681] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.758 [2024-10-17 17:46:45.962690] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.758 [2024-10-17 17:46:45.972613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.758 qpair failed and we were unable to recover it.
00:22:07.758 [2024-10-17 17:46:45.982598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.758 [2024-10-17 17:46:45.982642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.758 [2024-10-17 17:46:45.982660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.758 [2024-10-17 17:46:45.982670] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.758 [2024-10-17 17:46:45.982678] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.758 [2024-10-17 17:46:45.992851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.758 qpair failed and we were unable to recover it.
00:22:07.758 [2024-10-17 17:46:46.002743] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.758 [2024-10-17 17:46:46.002783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.758 [2024-10-17 17:46:46.002801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.758 [2024-10-17 17:46:46.002811] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.758 [2024-10-17 17:46:46.002820] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.758 [2024-10-17 17:46:46.012881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.758 qpair failed and we were unable to recover it.
00:22:07.758 [2024-10-17 17:46:46.022703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.758 [2024-10-17 17:46:46.022743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.758 [2024-10-17 17:46:46.022761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.758 [2024-10-17 17:46:46.022770] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.758 [2024-10-17 17:46:46.022782] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.758 [2024-10-17 17:46:46.032848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.758 qpair failed and we were unable to recover it.
00:22:07.758 [2024-10-17 17:46:46.042838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.758 [2024-10-17 17:46:46.042884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.758 [2024-10-17 17:46:46.042902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.758 [2024-10-17 17:46:46.042912] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.758 [2024-10-17 17:46:46.042921] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.758 [2024-10-17 17:46:46.052982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.758 qpair failed and we were unable to recover it.
00:22:07.758 [2024-10-17 17:46:46.062818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.758 [2024-10-17 17:46:46.062859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.758 [2024-10-17 17:46:46.062877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.758 [2024-10-17 17:46:46.062887] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.758 [2024-10-17 17:46:46.062896] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.758 [2024-10-17 17:46:46.073094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.758 qpair failed and we were unable to recover it.
00:22:07.758 [2024-10-17 17:46:46.083033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.758 [2024-10-17 17:46:46.083076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.758 [2024-10-17 17:46:46.083094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.758 [2024-10-17 17:46:46.083104] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.758 [2024-10-17 17:46:46.083113] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.758 [2024-10-17 17:46:46.093027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.759 qpair failed and we were unable to recover it.
00:22:07.759 [2024-10-17 17:46:46.102947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.759 [2024-10-17 17:46:46.102988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.759 [2024-10-17 17:46:46.103006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.759 [2024-10-17 17:46:46.103016] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.759 [2024-10-17 17:46:46.103025] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.759 [2024-10-17 17:46:46.113220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.759 qpair failed and we were unable to recover it.
00:22:07.759 [2024-10-17 17:46:46.122998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.759 [2024-10-17 17:46:46.123044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.759 [2024-10-17 17:46:46.123062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.759 [2024-10-17 17:46:46.123072] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.759 [2024-10-17 17:46:46.123081] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:07.759 [2024-10-17 17:46:46.133253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:07.759 qpair failed and we were unable to recover it.
00:22:07.759 [2024-10-17 17:46:46.143059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:07.759 [2024-10-17 17:46:46.143098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:07.759 [2024-10-17 17:46:46.143116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:07.759 [2024-10-17 17:46:46.143126] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:07.759 [2024-10-17 17:46:46.143135] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:08.017 [2024-10-17 17:46:46.153316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:08.017 qpair failed and we were unable to recover it.
00:22:08.017 [2024-10-17 17:46:46.163283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:08.017 [2024-10-17 17:46:46.163329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:08.017 [2024-10-17 17:46:46.163347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:08.017 [2024-10-17 17:46:46.163357] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:08.017 [2024-10-17 17:46:46.163365] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:08.017 [2024-10-17 17:46:46.173545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:08.017 qpair failed and we were unable to recover it.
00:22:08.017 [2024-10-17 17:46:46.183280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:08.017 [2024-10-17 17:46:46.183322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:08.017 [2024-10-17 17:46:46.183340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:08.017 [2024-10-17 17:46:46.183350] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:08.017 [2024-10-17 17:46:46.183358] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:08.017 [2024-10-17 17:46:46.193591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:08.017 qpair failed and we were unable to recover it.
00:22:08.017 [2024-10-17 17:46:46.203307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:08.017 [2024-10-17 17:46:46.203349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:08.017 [2024-10-17 17:46:46.203367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:08.017 [2024-10-17 17:46:46.203380] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:08.017 [2024-10-17 17:46:46.203389] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:08.017 [2024-10-17 17:46:46.213460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:08.017 qpair failed and we were unable to recover it.
00:22:08.017 [2024-10-17 17:46:46.223231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:08.017 [2024-10-17 17:46:46.223272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:08.017 [2024-10-17 17:46:46.223291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:08.017 [2024-10-17 17:46:46.223301] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:08.017 [2024-10-17 17:46:46.223309] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:08.017 [2024-10-17 17:46:46.233644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:08.017 qpair failed and we were unable to recover it.
00:22:08.017 [2024-10-17 17:46:46.243405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:08.017 [2024-10-17 17:46:46.243450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:08.017 [2024-10-17 17:46:46.243468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:08.017 [2024-10-17 17:46:46.243478] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:08.017 [2024-10-17 17:46:46.243487] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:08.017 [2024-10-17 17:46:46.253561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:08.017 qpair failed and we were unable to recover it.
00:22:08.017 [2024-10-17 17:46:46.263440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:08.017 [2024-10-17 17:46:46.263482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:08.017 [2024-10-17 17:46:46.263501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:08.017 [2024-10-17 17:46:46.263510] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:08.017 [2024-10-17 17:46:46.263519] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:08.017 [2024-10-17 17:46:46.273717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:08.017 qpair failed and we were unable to recover it.
00:22:08.017 [2024-10-17 17:46:46.283470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:08.017 [2024-10-17 17:46:46.283514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:08.017 [2024-10-17 17:46:46.283532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:08.017 [2024-10-17 17:46:46.283542] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:08.017 [2024-10-17 17:46:46.283551] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:08.017 [2024-10-17 17:46:46.293750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:08.017 qpair failed and we were unable to recover it.
00:22:08.017 [2024-10-17 17:46:46.303506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:08.017 [2024-10-17 17:46:46.303550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:08.017 [2024-10-17 17:46:46.303568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:08.017 [2024-10-17 17:46:46.303577] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:08.017 [2024-10-17 17:46:46.303586] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:08.017 [2024-10-17 17:46:46.313696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:08.017 qpair failed and we were unable to recover it.
00:22:08.017 [2024-10-17 17:46:46.323696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.018 [2024-10-17 17:46:46.323737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.018 [2024-10-17 17:46:46.323756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.018 [2024-10-17 17:46:46.323765] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.018 [2024-10-17 17:46:46.323774] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.018 [2024-10-17 17:46:46.333699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.018 qpair failed and we were unable to recover it. 00:22:08.018 [2024-10-17 17:46:46.343791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.018 [2024-10-17 17:46:46.343827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.018 [2024-10-17 17:46:46.343846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.018 [2024-10-17 17:46:46.343856] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.018 [2024-10-17 17:46:46.343866] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.018 [2024-10-17 17:46:46.353955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.018 qpair failed and we were unable to recover it. 00:22:08.018 [2024-10-17 17:46:46.363733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.018 [2024-10-17 17:46:46.363774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.018 [2024-10-17 17:46:46.363792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.018 [2024-10-17 17:46:46.363802] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.018 [2024-10-17 17:46:46.363811] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.018 [2024-10-17 17:46:46.374021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.018 qpair failed and we were unable to recover it. 
00:22:08.018 [2024-10-17 17:46:46.383714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.018 [2024-10-17 17:46:46.383759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.018 [2024-10-17 17:46:46.383781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.018 [2024-10-17 17:46:46.383791] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.018 [2024-10-17 17:46:46.383800] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.018 [2024-10-17 17:46:46.393893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.018 qpair failed and we were unable to recover it. 00:22:08.018 [2024-10-17 17:46:46.403808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.018 [2024-10-17 17:46:46.403855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.018 [2024-10-17 17:46:46.403873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.018 [2024-10-17 17:46:46.403883] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.018 [2024-10-17 17:46:46.403891] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.276 [2024-10-17 17:46:46.413923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.276 qpair failed and we were unable to recover it. 00:22:08.276 [2024-10-17 17:46:46.423964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.276 [2024-10-17 17:46:46.424013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.276 [2024-10-17 17:46:46.424032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.276 [2024-10-17 17:46:46.424041] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.276 [2024-10-17 17:46:46.424050] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.276 [2024-10-17 17:46:46.433994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.276 qpair failed and we were unable to recover it. 
00:22:08.276 [2024-10-17 17:46:46.444004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.276 [2024-10-17 17:46:46.444045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.276 [2024-10-17 17:46:46.444063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.276 [2024-10-17 17:46:46.444073] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.276 [2024-10-17 17:46:46.444082] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.276 [2024-10-17 17:46:46.454303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.276 qpair failed and we were unable to recover it. 00:22:08.276 [2024-10-17 17:46:46.464060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.276 [2024-10-17 17:46:46.464099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.276 [2024-10-17 17:46:46.464117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.276 [2024-10-17 17:46:46.464126] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.276 [2024-10-17 17:46:46.464135] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.276 [2024-10-17 17:46:46.474077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.276 qpair failed and we were unable to recover it. 00:22:08.276 [2024-10-17 17:46:46.484215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.276 [2024-10-17 17:46:46.484258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.276 [2024-10-17 17:46:46.484277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.276 [2024-10-17 17:46:46.484287] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.276 [2024-10-17 17:46:46.484296] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.276 [2024-10-17 17:46:46.494334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.276 qpair failed and we were unable to recover it. 
00:22:08.276 [2024-10-17 17:46:46.504187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.276 [2024-10-17 17:46:46.504229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.276 [2024-10-17 17:46:46.504247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.276 [2024-10-17 17:46:46.504257] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.276 [2024-10-17 17:46:46.504265] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.276 [2024-10-17 17:46:46.514427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.276 qpair failed and we were unable to recover it. 00:22:08.276 [2024-10-17 17:46:46.524337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.276 [2024-10-17 17:46:46.524380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.277 [2024-10-17 17:46:46.524397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.277 [2024-10-17 17:46:46.524407] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.277 [2024-10-17 17:46:46.524420] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.277 [2024-10-17 17:46:46.534557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.277 qpair failed and we were unable to recover it. 00:22:08.277 [2024-10-17 17:46:46.544351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.277 [2024-10-17 17:46:46.544390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.277 [2024-10-17 17:46:46.544407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.277 [2024-10-17 17:46:46.544421] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.277 [2024-10-17 17:46:46.544430] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.277 [2024-10-17 17:46:46.554480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.277 qpair failed and we were unable to recover it. 
00:22:08.277 [2024-10-17 17:46:46.564325] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.277 [2024-10-17 17:46:46.564369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.277 [2024-10-17 17:46:46.564387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.277 [2024-10-17 17:46:46.564397] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.277 [2024-10-17 17:46:46.564406] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.277 [2024-10-17 17:46:46.574575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.277 qpair failed and we were unable to recover it. 00:22:08.277 [2024-10-17 17:46:46.584407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.277 [2024-10-17 17:46:46.584452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.277 [2024-10-17 17:46:46.584470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.277 [2024-10-17 17:46:46.584480] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.277 [2024-10-17 17:46:46.584488] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.277 [2024-10-17 17:46:46.594779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.277 qpair failed and we were unable to recover it. 00:22:08.277 [2024-10-17 17:46:46.604489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.277 [2024-10-17 17:46:46.604532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.277 [2024-10-17 17:46:46.604550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.277 [2024-10-17 17:46:46.604560] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.277 [2024-10-17 17:46:46.604569] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.277 [2024-10-17 17:46:46.614795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.277 qpair failed and we were unable to recover it. 
00:22:08.277 [2024-10-17 17:46:46.624530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.277 [2024-10-17 17:46:46.624571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.277 [2024-10-17 17:46:46.624588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.277 [2024-10-17 17:46:46.624598] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.277 [2024-10-17 17:46:46.624607] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.277 [2024-10-17 17:46:46.634806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.277 qpair failed and we were unable to recover it. 00:22:08.277 [2024-10-17 17:46:46.644565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.277 [2024-10-17 17:46:46.644609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.277 [2024-10-17 17:46:46.644627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.277 [2024-10-17 17:46:46.644640] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.277 [2024-10-17 17:46:46.644649] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.277 [2024-10-17 17:46:46.654879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.277 qpair failed and we were unable to recover it. 00:22:08.277 [2024-10-17 17:46:46.664649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.277 [2024-10-17 17:46:46.664698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.277 [2024-10-17 17:46:46.664716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.277 [2024-10-17 17:46:46.664726] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.277 [2024-10-17 17:46:46.664735] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.535 [2024-10-17 17:46:46.674843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.536 qpair failed and we were unable to recover it. 
00:22:08.536 [2024-10-17 17:46:46.684700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.536 [2024-10-17 17:46:46.684743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.536 [2024-10-17 17:46:46.684761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.536 [2024-10-17 17:46:46.684770] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.536 [2024-10-17 17:46:46.684779] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.536 [2024-10-17 17:46:46.694893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.536 qpair failed and we were unable to recover it. 00:22:08.536 [2024-10-17 17:46:46.704788] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.536 [2024-10-17 17:46:46.704830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.536 [2024-10-17 17:46:46.704847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.536 [2024-10-17 17:46:46.704857] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.536 [2024-10-17 17:46:46.704866] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.536 [2024-10-17 17:46:46.714879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.536 qpair failed and we were unable to recover it. 00:22:08.536 [2024-10-17 17:46:46.724961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.536 [2024-10-17 17:46:46.725004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.536 [2024-10-17 17:46:46.725022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.536 [2024-10-17 17:46:46.725032] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.536 [2024-10-17 17:46:46.725041] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.536 [2024-10-17 17:46:46.735024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.536 qpair failed and we were unable to recover it. 
00:22:08.536 [2024-10-17 17:46:46.744929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.536 [2024-10-17 17:46:46.744969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.536 [2024-10-17 17:46:46.744987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.536 [2024-10-17 17:46:46.744997] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.536 [2024-10-17 17:46:46.745006] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.536 [2024-10-17 17:46:46.755131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.536 qpair failed and we were unable to recover it. 00:22:08.536 [2024-10-17 17:46:46.764928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.536 [2024-10-17 17:46:46.764972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.536 [2024-10-17 17:46:46.764990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.536 [2024-10-17 17:46:46.765000] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.536 [2024-10-17 17:46:46.765009] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.536 [2024-10-17 17:46:46.775127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.536 qpair failed and we were unable to recover it. 00:22:08.536 [2024-10-17 17:46:46.785011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.536 [2024-10-17 17:46:46.785053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.536 [2024-10-17 17:46:46.785071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.536 [2024-10-17 17:46:46.785080] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.536 [2024-10-17 17:46:46.785089] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.536 [2024-10-17 17:46:46.795217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.536 qpair failed and we were unable to recover it. 
00:22:08.536 [2024-10-17 17:46:46.804988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.536 [2024-10-17 17:46:46.805029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.536 [2024-10-17 17:46:46.805047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.536 [2024-10-17 17:46:46.805056] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.536 [2024-10-17 17:46:46.805065] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.536 [2024-10-17 17:46:46.815200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.536 qpair failed and we were unable to recover it. 00:22:08.536 [2024-10-17 17:46:46.825140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.536 [2024-10-17 17:46:46.825183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.536 [2024-10-17 17:46:46.825207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.536 [2024-10-17 17:46:46.825217] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.536 [2024-10-17 17:46:46.825225] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.536 [2024-10-17 17:46:46.835341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.536 qpair failed and we were unable to recover it. 00:22:08.536 [2024-10-17 17:46:46.845080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.536 [2024-10-17 17:46:46.845124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.536 [2024-10-17 17:46:46.845142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.536 [2024-10-17 17:46:46.845151] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.536 [2024-10-17 17:46:46.845160] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.536 [2024-10-17 17:46:46.855319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.536 qpair failed and we were unable to recover it. 
00:22:08.536 [2024-10-17 17:46:46.865202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.536 [2024-10-17 17:46:46.865245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.536 [2024-10-17 17:46:46.865263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.536 [2024-10-17 17:46:46.865272] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.536 [2024-10-17 17:46:46.865281] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.536 [2024-10-17 17:46:46.875398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.536 qpair failed and we were unable to recover it. 00:22:08.536 [2024-10-17 17:46:46.885254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.536 [2024-10-17 17:46:46.885295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.536 [2024-10-17 17:46:46.885314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.536 [2024-10-17 17:46:46.885323] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.536 [2024-10-17 17:46:46.885332] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.536 [2024-10-17 17:46:46.895445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.536 qpair failed and we were unable to recover it. 00:22:08.536 [2024-10-17 17:46:46.905377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.536 [2024-10-17 17:46:46.905423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.536 [2024-10-17 17:46:46.905442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.536 [2024-10-17 17:46:46.905452] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.536 [2024-10-17 17:46:46.905461] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.536 [2024-10-17 17:46:46.915537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.536 qpair failed and we were unable to recover it. 
00:22:08.536 [2024-10-17 17:46:46.925474] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.536 [2024-10-17 17:46:46.925518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.536 [2024-10-17 17:46:46.925536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.536 [2024-10-17 17:46:46.925546] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.536 [2024-10-17 17:46:46.925554] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.795 [2024-10-17 17:46:46.935399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.795 qpair failed and we were unable to recover it. 00:22:08.795 [2024-10-17 17:46:46.945512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.795 [2024-10-17 17:46:46.945559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.795 [2024-10-17 17:46:46.945577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.795 [2024-10-17 17:46:46.945587] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.795 [2024-10-17 17:46:46.945596] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.795 [2024-10-17 17:46:46.955648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.795 qpair failed and we were unable to recover it. 00:22:08.795 [2024-10-17 17:46:46.965471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.795 [2024-10-17 17:46:46.965518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.795 [2024-10-17 17:46:46.965536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.795 [2024-10-17 17:46:46.965546] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.795 [2024-10-17 17:46:46.965555] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.795 [2024-10-17 17:46:46.975622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.795 qpair failed and we were unable to recover it. 
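The target-side half of every iteration above is a controller-ID lookup that keeps failing: an I/O-queue CONNECT must carry a CNTLID that maps to a controller created by an earlier admin-queue CONNECT, and these records show no such controller being found. A hypothetical sketch of that kind of check; the struct and helper names here are illustrative assumptions, not SPDK's internals:

    /* Hypothetical target-side lookup: a failed search is what the log
     * records as "Unknown controller ID 0x1" before the CONNECT is
     * rejected with Connect Invalid Parameters. */
    #include <stdint.h>
    #include <stddef.h>
    #include <sys/queue.h>

    struct ctrlr {
            uint16_t cntlid;
            TAILQ_ENTRY(ctrlr) link;
    };

    struct subsystem {
            TAILQ_HEAD(, ctrlr) ctrlrs;
    };

    static struct ctrlr *find_ctrlr_by_cntlid(struct subsystem *subsys, uint16_t cntlid)
    {
            struct ctrlr *ctrlr;

            TAILQ_FOREACH(ctrlr, &subsys->ctrlrs, link) {
                    if (ctrlr->cntlid == cntlid) {
                            return ctrlr;  /* CONNECT proceeds on this controller */
                    }
            }
            return NULL;  /* unknown CNTLID: reject the I/O-queue CONNECT */
    }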
00:22:08.795 [2024-10-17 17:46:46.985586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.795 [2024-10-17 17:46:46.985624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.795 [2024-10-17 17:46:46.985643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.795 [2024-10-17 17:46:46.985652] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.795 [2024-10-17 17:46:46.985661] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.795 [2024-10-17 17:46:46.995830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.795 qpair failed and we were unable to recover it. 00:22:08.795 [2024-10-17 17:46:47.005600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.795 [2024-10-17 17:46:47.005650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.795 [2024-10-17 17:46:47.005668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.795 [2024-10-17 17:46:47.005678] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.795 [2024-10-17 17:46:47.005686] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.795 [2024-10-17 17:46:47.015623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.795 qpair failed and we were unable to recover it. 00:22:08.795 [2024-10-17 17:46:47.025637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.796 [2024-10-17 17:46:47.025685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.796 [2024-10-17 17:46:47.025704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.796 [2024-10-17 17:46:47.025714] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.796 [2024-10-17 17:46:47.025724] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.796 [2024-10-17 17:46:47.035864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.796 qpair failed and we were unable to recover it. 
00:22:08.796 [2024-10-17 17:46:47.045766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.796 [2024-10-17 17:46:47.045812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.796 [2024-10-17 17:46:47.045831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.796 [2024-10-17 17:46:47.045841] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.796 [2024-10-17 17:46:47.045849] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.796 [2024-10-17 17:46:47.055943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.796 qpair failed and we were unable to recover it. 00:22:08.796 [2024-10-17 17:46:47.065737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.796 [2024-10-17 17:46:47.065781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.796 [2024-10-17 17:46:47.065800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.796 [2024-10-17 17:46:47.065809] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.796 [2024-10-17 17:46:47.065818] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.796 [2024-10-17 17:46:47.076012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.796 qpair failed and we were unable to recover it. 00:22:08.796 [2024-10-17 17:46:47.085820] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.796 [2024-10-17 17:46:47.085863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.796 [2024-10-17 17:46:47.085882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.796 [2024-10-17 17:46:47.085891] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.796 [2024-10-17 17:46:47.085904] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.796 [2024-10-17 17:46:47.095838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.796 qpair failed and we were unable to recover it. 
00:22:08.796 [2024-10-17 17:46:47.105990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.796 [2024-10-17 17:46:47.106030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.796 [2024-10-17 17:46:47.106048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.796 [2024-10-17 17:46:47.106058] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.796 [2024-10-17 17:46:47.106066] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.796 [2024-10-17 17:46:47.115970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.796 qpair failed and we were unable to recover it. 00:22:08.796 [2024-10-17 17:46:47.125925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.796 [2024-10-17 17:46:47.125966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.796 [2024-10-17 17:46:47.125984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.796 [2024-10-17 17:46:47.125994] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.796 [2024-10-17 17:46:47.126002] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.796 [2024-10-17 17:46:47.136266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.796 qpair failed and we were unable to recover it. 00:22:08.796 [2024-10-17 17:46:47.146092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.796 [2024-10-17 17:46:47.146133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.796 [2024-10-17 17:46:47.146151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.796 [2024-10-17 17:46:47.146161] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.796 [2024-10-17 17:46:47.146169] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.796 [2024-10-17 17:46:47.156284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.796 qpair failed and we were unable to recover it. 
00:22:08.796 [2024-10-17 17:46:47.166091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:08.796 [2024-10-17 17:46:47.166136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:08.796 [2024-10-17 17:46:47.166153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:08.796 [2024-10-17 17:46:47.166163] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:08.796 [2024-10-17 17:46:47.166171] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:08.796 [2024-10-17 17:46:47.176057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:08.796 qpair failed and we were unable to recover it. 00:22:09.063 [2024-10-17 17:46:47.186202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:09.063 [2024-10-17 17:46:47.186250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:09.063 [2024-10-17 17:46:47.186267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:09.063 [2024-10-17 17:46:47.186277] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:09.063 [2024-10-17 17:46:47.186286] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:09.063 [2024-10-17 17:46:47.196044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:09.063 qpair failed and we were unable to recover it. 00:22:09.063 [2024-10-17 17:46:47.206183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:09.063 [2024-10-17 17:46:47.206223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:09.063 [2024-10-17 17:46:47.206241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:09.063 [2024-10-17 17:46:47.206250] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:09.063 [2024-10-17 17:46:47.206259] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:09.063 [2024-10-17 17:46:47.216250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:09.064 qpair failed and we were unable to recover it. 
00:22:09.064 [2024-10-17 17:46:47.226200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:09.064 [2024-10-17 17:46:47.226245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:09.064 [2024-10-17 17:46:47.226266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:09.064 [2024-10-17 17:46:47.226278] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:09.064 [2024-10-17 17:46:47.226289] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:09.064 [2024-10-17 17:46:47.236532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:09.064 qpair failed and we were unable to recover it. 00:22:09.064 [2024-10-17 17:46:47.246398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:09.064 [2024-10-17 17:46:47.246448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:09.064 [2024-10-17 17:46:47.246467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:09.064 [2024-10-17 17:46:47.246476] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:09.064 [2024-10-17 17:46:47.246485] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:09.064 [2024-10-17 17:46:47.256552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:09.064 qpair failed and we were unable to recover it. 00:22:09.064 [2024-10-17 17:46:47.266411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:09.064 [2024-10-17 17:46:47.266457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:09.064 [2024-10-17 17:46:47.266478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:09.064 [2024-10-17 17:46:47.266488] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:09.064 [2024-10-17 17:46:47.266497] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:09.064 [2024-10-17 17:46:47.276594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:09.064 qpair failed and we were unable to recover it. 
00:22:09.064 [2024-10-17 17:46:47.286377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:09.064 [2024-10-17 17:46:47.286427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:09.064 [2024-10-17 17:46:47.286446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:09.064 [2024-10-17 17:46:47.286455] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:09.064 [2024-10-17 17:46:47.286464] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:09.064 [2024-10-17 17:46:47.296657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:09.064 qpair failed and we were unable to recover it. 00:22:09.064 [2024-10-17 17:46:47.306485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:09.064 [2024-10-17 17:46:47.306520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:09.064 [2024-10-17 17:46:47.306538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:09.064 [2024-10-17 17:46:47.306548] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:09.064 [2024-10-17 17:46:47.306557] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:09.064 [2024-10-17 17:46:47.316850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:09.064 qpair failed and we were unable to recover it. 00:22:09.064 [2024-10-17 17:46:47.326515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:09.064 [2024-10-17 17:46:47.326557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:09.064 [2024-10-17 17:46:47.326576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:09.064 [2024-10-17 17:46:47.326585] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:09.064 [2024-10-17 17:46:47.326595] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:09.064 [2024-10-17 17:46:47.336714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:09.064 qpair failed and we were unable to recover it. 
00:22:09.064 [2024-10-17 17:46:47.346562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:09.064 [2024-10-17 17:46:47.346605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:09.064 [2024-10-17 17:46:47.346623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:09.064 [2024-10-17 17:46:47.346633] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:09.064 [2024-10-17 17:46:47.346641] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:09.064 [2024-10-17 17:46:47.356902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:09.064 qpair failed and we were unable to recover it. 00:22:09.064 [2024-10-17 17:46:47.366598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:09.064 [2024-10-17 17:46:47.366639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:09.064 [2024-10-17 17:46:47.366657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:09.064 [2024-10-17 17:46:47.366667] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:09.064 [2024-10-17 17:46:47.366675] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:09.064 [2024-10-17 17:46:47.376732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:09.064 qpair failed and we were unable to recover it. 00:22:09.064 [2024-10-17 17:46:47.386733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:09.064 [2024-10-17 17:46:47.386777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:09.064 [2024-10-17 17:46:47.386796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:09.064 [2024-10-17 17:46:47.386805] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:09.064 [2024-10-17 17:46:47.386814] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:09.064 [2024-10-17 17:46:47.396895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:09.064 qpair failed and we were unable to recover it. 
00:22:09.064 [2024-10-17 17:46:47.406727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:09.064 [2024-10-17 17:46:47.406771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:09.064 [2024-10-17 17:46:47.406789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:09.064 [2024-10-17 17:46:47.406798] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:09.064 [2024-10-17 17:46:47.406808] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:09.064 [2024-10-17 17:46:47.416975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:09.064 qpair failed and we were unable to recover it. 00:22:09.064 [2024-10-17 17:46:47.426814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:09.064 [2024-10-17 17:46:47.426861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:09.064 [2024-10-17 17:46:47.426879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:09.064 [2024-10-17 17:46:47.426888] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:09.064 [2024-10-17 17:46:47.426897] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:09.064 [2024-10-17 17:46:47.437059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:09.064 qpair failed and we were unable to recover it. 00:22:09.064 [2024-10-17 17:46:47.446811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:09.064 [2024-10-17 17:46:47.446854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:09.064 [2024-10-17 17:46:47.446877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:09.064 [2024-10-17 17:46:47.446887] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:09.064 [2024-10-17 17:46:47.446896] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:09.326 [2024-10-17 17:46:47.457094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:09.326 qpair failed and we were unable to recover it. 
00:22:09.326 [2024-10-17 17:46:47.466919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.326 [2024-10-17 17:46:47.466964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.326 [2024-10-17 17:46:47.466982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.326 [2024-10-17 17:46:47.466991] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.326 [2024-10-17 17:46:47.467000] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.326 [2024-10-17 17:46:47.477158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.326 qpair failed and we were unable to recover it.
00:22:09.326 [2024-10-17 17:46:47.487133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.326 [2024-10-17 17:46:47.487177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.326 [2024-10-17 17:46:47.487195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.326 [2024-10-17 17:46:47.487205] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.326 [2024-10-17 17:46:47.487214] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.326 [2024-10-17 17:46:47.497166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.326 qpair failed and we were unable to recover it.
00:22:09.326 [2024-10-17 17:46:47.507053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.326 [2024-10-17 17:46:47.507094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.326 [2024-10-17 17:46:47.507112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.326 [2024-10-17 17:46:47.507122] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.326 [2024-10-17 17:46:47.507131] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.326 [2024-10-17 17:46:47.517252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.326 qpair failed and we were unable to recover it.
00:22:09.326 [2024-10-17 17:46:47.527066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.326 [2024-10-17 17:46:47.527110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.326 [2024-10-17 17:46:47.527129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.326 [2024-10-17 17:46:47.527138] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.326 [2024-10-17 17:46:47.527150] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.326 [2024-10-17 17:46:47.537267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.326 qpair failed and we were unable to recover it.
00:22:09.326 [2024-10-17 17:46:47.547170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.326 [2024-10-17 17:46:47.547206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.326 [2024-10-17 17:46:47.547224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.326 [2024-10-17 17:46:47.547234] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.326 [2024-10-17 17:46:47.547242] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.326 [2024-10-17 17:46:47.557351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.326 qpair failed and we were unable to recover it.
00:22:09.326 [2024-10-17 17:46:47.567233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.326 [2024-10-17 17:46:47.567277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.326 [2024-10-17 17:46:47.567295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.326 [2024-10-17 17:46:47.567305] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.326 [2024-10-17 17:46:47.567314] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.326 [2024-10-17 17:46:47.577321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.326 qpair failed and we were unable to recover it.
00:22:09.326 [2024-10-17 17:46:47.587226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.326 [2024-10-17 17:46:47.587264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.326 [2024-10-17 17:46:47.587283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.326 [2024-10-17 17:46:47.587292] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.326 [2024-10-17 17:46:47.587301] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.326 [2024-10-17 17:46:47.597409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.326 qpair failed and we were unable to recover it.
00:22:09.326 [2024-10-17 17:46:47.607377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.326 [2024-10-17 17:46:47.607415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.326 [2024-10-17 17:46:47.607437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.326 [2024-10-17 17:46:47.607447] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.326 [2024-10-17 17:46:47.607455] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.326 [2024-10-17 17:46:47.617452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.326 qpair failed and we were unable to recover it.
00:22:09.326 [2024-10-17 17:46:47.627297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.326 [2024-10-17 17:46:47.627334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.326 [2024-10-17 17:46:47.627352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.326 [2024-10-17 17:46:47.627362] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.326 [2024-10-17 17:46:47.627371] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.326 [2024-10-17 17:46:47.637598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.326 qpair failed and we were unable to recover it.
00:22:09.326 [2024-10-17 17:46:47.647496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.326 [2024-10-17 17:46:47.647538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.326 [2024-10-17 17:46:47.647556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.327 [2024-10-17 17:46:47.647566] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.327 [2024-10-17 17:46:47.647574] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.327 [2024-10-17 17:46:47.657637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.327 qpair failed and we were unable to recover it.
00:22:09.327 [2024-10-17 17:46:47.667526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.327 [2024-10-17 17:46:47.667572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.327 [2024-10-17 17:46:47.667590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.327 [2024-10-17 17:46:47.667600] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.327 [2024-10-17 17:46:47.667609] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.327 [2024-10-17 17:46:47.677743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.327 qpair failed and we were unable to recover it.
00:22:09.327 [2024-10-17 17:46:47.687616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.327 [2024-10-17 17:46:47.687655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.327 [2024-10-17 17:46:47.687672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.327 [2024-10-17 17:46:47.687682] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.327 [2024-10-17 17:46:47.687691] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.327 [2024-10-17 17:46:47.697873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.327 qpair failed and we were unable to recover it.
00:22:09.327 [2024-10-17 17:46:47.707587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.327 [2024-10-17 17:46:47.707624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.327 [2024-10-17 17:46:47.707642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.327 [2024-10-17 17:46:47.707654] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.327 [2024-10-17 17:46:47.707663] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.585 [2024-10-17 17:46:47.717731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.585 qpair failed and we were unable to recover it.
00:22:09.585 [2024-10-17 17:46:47.727717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.585 [2024-10-17 17:46:47.727763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.585 [2024-10-17 17:46:47.727783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.585 [2024-10-17 17:46:47.727792] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.585 [2024-10-17 17:46:47.727801] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.585 [2024-10-17 17:46:47.737848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.585 qpair failed and we were unable to recover it.
00:22:09.585 [2024-10-17 17:46:47.747659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.585 [2024-10-17 17:46:47.747701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.585 [2024-10-17 17:46:47.747720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.585 [2024-10-17 17:46:47.747729] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.585 [2024-10-17 17:46:47.747738] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.585 [2024-10-17 17:46:47.757921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.585 qpair failed and we were unable to recover it.
00:22:09.585 [2024-10-17 17:46:47.767754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.585 [2024-10-17 17:46:47.767791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.585 [2024-10-17 17:46:47.767810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.585 [2024-10-17 17:46:47.767819] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.585 [2024-10-17 17:46:47.767828] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.585 [2024-10-17 17:46:47.777944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.585 qpair failed and we were unable to recover it.
00:22:09.585 [2024-10-17 17:46:47.787734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.585 [2024-10-17 17:46:47.787782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.585 [2024-10-17 17:46:47.787800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.585 [2024-10-17 17:46:47.787810] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.585 [2024-10-17 17:46:47.787819] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.585 [2024-10-17 17:46:47.797964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.585 qpair failed and we were unable to recover it.
00:22:09.585 [2024-10-17 17:46:47.807918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.585 [2024-10-17 17:46:47.807963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.585 [2024-10-17 17:46:47.807981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.585 [2024-10-17 17:46:47.807991] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.585 [2024-10-17 17:46:47.808000] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.585 [2024-10-17 17:46:47.818148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.585 qpair failed and we were unable to recover it.
00:22:09.585 [2024-10-17 17:46:47.827946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.585 [2024-10-17 17:46:47.827986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.585 [2024-10-17 17:46:47.828005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.585 [2024-10-17 17:46:47.828015] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.585 [2024-10-17 17:46:47.828024] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.585 [2024-10-17 17:46:47.838046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.585 qpair failed and we were unable to recover it.
00:22:09.585 [2024-10-17 17:46:47.847997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.585 [2024-10-17 17:46:47.848038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.585 [2024-10-17 17:46:47.848056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.585 [2024-10-17 17:46:47.848066] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.585 [2024-10-17 17:46:47.848075] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.585 [2024-10-17 17:46:47.858224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.585 qpair failed and we were unable to recover it.
00:22:09.585 [2024-10-17 17:46:47.868016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.585 [2024-10-17 17:46:47.868057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.585 [2024-10-17 17:46:47.868075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.585 [2024-10-17 17:46:47.868085] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.585 [2024-10-17 17:46:47.868094] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.585 [2024-10-17 17:46:47.878108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.585 qpair failed and we were unable to recover it.
00:22:09.585 [2024-10-17 17:46:47.888176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.585 [2024-10-17 17:46:47.888216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.585 [2024-10-17 17:46:47.888238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.586 [2024-10-17 17:46:47.888248] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.586 [2024-10-17 17:46:47.888257] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.586 [2024-10-17 17:46:47.898402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.586 qpair failed and we were unable to recover it.
00:22:09.586 [2024-10-17 17:46:47.908144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.586 [2024-10-17 17:46:47.908189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.586 [2024-10-17 17:46:47.908207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.586 [2024-10-17 17:46:47.908217] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.586 [2024-10-17 17:46:47.908226] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.586 [2024-10-17 17:46:47.918598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.586 qpair failed and we were unable to recover it.
00:22:09.586 [2024-10-17 17:46:47.928288] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.586 [2024-10-17 17:46:47.928333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.586 [2024-10-17 17:46:47.928352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.586 [2024-10-17 17:46:47.928362] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.586 [2024-10-17 17:46:47.928371] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.586 [2024-10-17 17:46:47.938388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.586 qpair failed and we were unable to recover it.
00:22:09.586 [2024-10-17 17:46:47.948273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.586 [2024-10-17 17:46:47.948315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.586 [2024-10-17 17:46:47.948333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.586 [2024-10-17 17:46:47.948342] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.586 [2024-10-17 17:46:47.948351] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.586 [2024-10-17 17:46:47.958579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.586 qpair failed and we were unable to recover it.
00:22:09.586 [2024-10-17 17:46:47.968407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.586 [2024-10-17 17:46:47.968453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.586 [2024-10-17 17:46:47.968471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.586 [2024-10-17 17:46:47.968481] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.586 [2024-10-17 17:46:47.968493] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.844 [2024-10-17 17:46:47.978563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.844 qpair failed and we were unable to recover it.
00:22:09.844 [2024-10-17 17:46:47.988389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.844 [2024-10-17 17:46:47.988436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.844 [2024-10-17 17:46:47.988454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.844 [2024-10-17 17:46:47.988464] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.844 [2024-10-17 17:46:47.988473] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.844 [2024-10-17 17:46:47.998441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.844 qpair failed and we were unable to recover it.
00:22:09.844 [2024-10-17 17:46:48.008492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.844 [2024-10-17 17:46:48.008529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.844 [2024-10-17 17:46:48.008547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.844 [2024-10-17 17:46:48.008557] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.844 [2024-10-17 17:46:48.008566] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.844 [2024-10-17 17:46:48.018651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.844 qpair failed and we were unable to recover it.
00:22:09.844 [2024-10-17 17:46:48.028440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.844 [2024-10-17 17:46:48.028482] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.844 [2024-10-17 17:46:48.028501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.844 [2024-10-17 17:46:48.028510] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.844 [2024-10-17 17:46:48.028519] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.844 [2024-10-17 17:46:48.038569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.844 qpair failed and we were unable to recover it.
00:22:09.844 [2024-10-17 17:46:48.048635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.844 [2024-10-17 17:46:48.048681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.844 [2024-10-17 17:46:48.048699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.844 [2024-10-17 17:46:48.048709] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.844 [2024-10-17 17:46:48.048719] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.844 [2024-10-17 17:46:48.058875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.844 qpair failed and we were unable to recover it.
00:22:09.844 [2024-10-17 17:46:48.068543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.844 [2024-10-17 17:46:48.068590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.844 [2024-10-17 17:46:48.068608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.844 [2024-10-17 17:46:48.068618] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.844 [2024-10-17 17:46:48.068626] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.844 [2024-10-17 17:46:48.078933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.844 qpair failed and we were unable to recover it.
00:22:09.844 [2024-10-17 17:46:48.088730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.844 [2024-10-17 17:46:48.088768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.844 [2024-10-17 17:46:48.088786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.844 [2024-10-17 17:46:48.088796] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.844 [2024-10-17 17:46:48.088804] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.844 [2024-10-17 17:46:48.098658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.844 qpair failed and we were unable to recover it.
00:22:09.844 [2024-10-17 17:46:48.108650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.844 [2024-10-17 17:46:48.108693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.844 [2024-10-17 17:46:48.108710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.844 [2024-10-17 17:46:48.108720] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.844 [2024-10-17 17:46:48.108729] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.844 [2024-10-17 17:46:48.118980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.844 qpair failed and we were unable to recover it.
00:22:09.844 [2024-10-17 17:46:48.128847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.844 [2024-10-17 17:46:48.128890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.844 [2024-10-17 17:46:48.128908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.844 [2024-10-17 17:46:48.128918] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.845 [2024-10-17 17:46:48.128926] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.845 [2024-10-17 17:46:48.138936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.845 qpair failed and we were unable to recover it.
00:22:09.845 [2024-10-17 17:46:48.148803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.845 [2024-10-17 17:46:48.148844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.845 [2024-10-17 17:46:48.148863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.845 [2024-10-17 17:46:48.148875] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.845 [2024-10-17 17:46:48.148884] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.845 [2024-10-17 17:46:48.158894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.845 qpair failed and we were unable to recover it.
00:22:09.845 [2024-10-17 17:46:48.168853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.845 [2024-10-17 17:46:48.168891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.845 [2024-10-17 17:46:48.168910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.845 [2024-10-17 17:46:48.168919] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.845 [2024-10-17 17:46:48.168928] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.845 [2024-10-17 17:46:48.179105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.845 qpair failed and we were unable to recover it.
00:22:09.845 [2024-10-17 17:46:48.188983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.845 [2024-10-17 17:46:48.189024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.845 [2024-10-17 17:46:48.189041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.845 [2024-10-17 17:46:48.189051] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.845 [2024-10-17 17:46:48.189060] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.845 [2024-10-17 17:46:48.199170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.845 qpair failed and we were unable to recover it.
00:22:09.845 [2024-10-17 17:46:48.209065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.845 [2024-10-17 17:46:48.209108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.845 [2024-10-17 17:46:48.209126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.845 [2024-10-17 17:46:48.209136] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.845 [2024-10-17 17:46:48.209145] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:09.845 [2024-10-17 17:46:48.219139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:09.845 qpair failed and we were unable to recover it.
00:22:09.845 [2024-10-17 17:46:48.229029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:09.845 [2024-10-17 17:46:48.229071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:09.845 [2024-10-17 17:46:48.229090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:09.845 [2024-10-17 17:46:48.229099] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:09.845 [2024-10-17 17:46:48.229108] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:10.103 [2024-10-17 17:46:48.239313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:10.103 qpair failed and we were unable to recover it.
00:22:10.103 [2024-10-17 17:46:48.249167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:10.103 [2024-10-17 17:46:48.249212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:10.103 [2024-10-17 17:46:48.249229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:10.103 [2024-10-17 17:46:48.249239] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:10.103 [2024-10-17 17:46:48.249248] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:10.103 [2024-10-17 17:46:48.259479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:10.103 qpair failed and we were unable to recover it.
00:22:10.103 [2024-10-17 17:46:48.269211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:10.103 [2024-10-17 17:46:48.269247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:10.103 [2024-10-17 17:46:48.269265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:10.103 [2024-10-17 17:46:48.269274] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:10.103 [2024-10-17 17:46:48.269283] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:10.103 [2024-10-17 17:46:48.279347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:10.103 qpair failed and we were unable to recover it.
00:22:10.103 [2024-10-17 17:46:48.289389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:10.103 [2024-10-17 17:46:48.289439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:10.103 [2024-10-17 17:46:48.289456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:10.103 [2024-10-17 17:46:48.289466] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:10.103 [2024-10-17 17:46:48.289475] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:10.103 [2024-10-17 17:46:48.299505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:10.103 qpair failed and we were unable to recover it.
00:22:10.103 [2024-10-17 17:46:48.309343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:10.103 [2024-10-17 17:46:48.309385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:10.103 [2024-10-17 17:46:48.309403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:10.103 [2024-10-17 17:46:48.309413] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:10.103 [2024-10-17 17:46:48.309426] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:10.103 [2024-10-17 17:46:48.319581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:10.103 qpair failed and we were unable to recover it.
00:22:10.103 [2024-10-17 17:46:48.329322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:10.103 [2024-10-17 17:46:48.329368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:10.103 [2024-10-17 17:46:48.329393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:10.103 [2024-10-17 17:46:48.329403] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:10.103 [2024-10-17 17:46:48.329412] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:10.103 [2024-10-17 17:46:48.339488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:10.103 qpair failed and we were unable to recover it.
00:22:10.103 [2024-10-17 17:46:48.349460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:10.103 [2024-10-17 17:46:48.349502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:10.104 [2024-10-17 17:46:48.349520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:10.104 [2024-10-17 17:46:48.349530] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:10.104 [2024-10-17 17:46:48.349539] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:10.104 [2024-10-17 17:46:48.359913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:10.104 qpair failed and we were unable to recover it.
00:22:10.104 [2024-10-17 17:46:48.369513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:10.104 [2024-10-17 17:46:48.369552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:10.104 [2024-10-17 17:46:48.369570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:10.104 [2024-10-17 17:46:48.369580] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:10.104 [2024-10-17 17:46:48.369589] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:10.104 [2024-10-17 17:46:48.379808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:10.104 qpair failed and we were unable to recover it.
00:22:10.104 [2024-10-17 17:46:48.389497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:10.104 [2024-10-17 17:46:48.389542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:10.104 [2024-10-17 17:46:48.389561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:10.104 [2024-10-17 17:46:48.389571] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:10.104 [2024-10-17 17:46:48.389579] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:10.104 [2024-10-17 17:46:48.399692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:10.104 qpair failed and we were unable to recover it.
00:22:10.104 [2024-10-17 17:46:48.409602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:10.104 [2024-10-17 17:46:48.409640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:10.104 [2024-10-17 17:46:48.409658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:10.104 [2024-10-17 17:46:48.409668] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:10.104 [2024-10-17 17:46:48.409677] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:10.104 [2024-10-17 17:46:48.419739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:10.104 qpair failed and we were unable to recover it.
00:22:10.104 [2024-10-17 17:46:48.429604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:10.104 [2024-10-17 17:46:48.429645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:10.104 [2024-10-17 17:46:48.429664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:10.104 [2024-10-17 17:46:48.429674] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:10.104 [2024-10-17 17:46:48.429683] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:10.104 [2024-10-17 17:46:48.439909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:10.104 qpair failed and we were unable to recover it.
00:22:10.104 [2024-10-17 17:46:48.449721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:10.104 [2024-10-17 17:46:48.449766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:10.104 [2024-10-17 17:46:48.449784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:10.104 [2024-10-17 17:46:48.449794] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:10.104 [2024-10-17 17:46:48.449803] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:10.104 [2024-10-17 17:46:48.460036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:10.104 qpair failed and we were unable to recover it.
00:22:10.104 [2024-10-17 17:46:48.469760] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:10.104 [2024-10-17 17:46:48.469804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:10.104 [2024-10-17 17:46:48.469822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:10.104 [2024-10-17 17:46:48.469832] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:10.104 [2024-10-17 17:46:48.469841] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:10.104 [2024-10-17 17:46:48.480121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:10.104 qpair failed and we were unable to recover it.
00:22:10.104 [2024-10-17 17:46:48.489793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:10.104 [2024-10-17 17:46:48.489836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:10.104 [2024-10-17 17:46:48.489854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:10.104 [2024-10-17 17:46:48.489864] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:10.104 [2024-10-17 17:46:48.489873] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:10.362 [2024-10-17 17:46:48.500013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:10.362 qpair failed and we were unable to recover it.
00:22:10.362 [2024-10-17 17:46:48.509846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:10.362 [2024-10-17 17:46:48.509895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:10.362 [2024-10-17 17:46:48.509913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:10.362 [2024-10-17 17:46:48.509923] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:10.362 [2024-10-17 17:46:48.509931] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:10.362 [2024-10-17 17:46:48.520081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:10.362 qpair failed and we were unable to recover it.
00:22:10.362 [2024-10-17 17:46:48.529921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:10.362 [2024-10-17 17:46:48.529966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:10.362 [2024-10-17 17:46:48.529985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:10.362 [2024-10-17 17:46:48.529995] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:10.362 [2024-10-17 17:46:48.530004] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:10.362 [2024-10-17 17:46:48.540120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:10.362 qpair failed and we were unable to recover it.
00:22:10.362 [2024-10-17 17:46:48.549997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:10.362 [2024-10-17 17:46:48.550043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:10.362 [2024-10-17 17:46:48.550061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:10.362 [2024-10-17 17:46:48.550071] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:10.362 [2024-10-17 17:46:48.550079] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:10.362 [2024-10-17 17:46:48.560157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:10.362 qpair failed and we were unable to recover it.
00:22:10.362 [2024-10-17 17:46:48.570178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:10.362 [2024-10-17 17:46:48.570225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:10.362 [2024-10-17 17:46:48.570243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:10.363 [2024-10-17 17:46:48.570253] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:10.363 [2024-10-17 17:46:48.570261] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:10.363 [2024-10-17 17:46:48.580211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:10.363 qpair failed and we were unable to recover it.
00:22:10.363 [2024-10-17 17:46:48.590056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:22:10.363 [2024-10-17 17:46:48.590097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:22:10.363 [2024-10-17 17:46:48.590115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:22:10.363 [2024-10-17 17:46:48.590128] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:22:10.363 [2024-10-17 17:46:48.590137] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940
00:22:10.363 [2024-10-17 17:46:48.600379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:22:10.363 qpair failed and we were unable to recover it.
00:22:10.363 [2024-10-17 17:46:48.610287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.363 [2024-10-17 17:46:48.610332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.363 [2024-10-17 17:46:48.610350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.363 [2024-10-17 17:46:48.610359] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.363 [2024-10-17 17:46:48.610368] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.363 [2024-10-17 17:46:48.620142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.363 qpair failed and we were unable to recover it. 00:22:10.363 [2024-10-17 17:46:48.630208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.363 [2024-10-17 17:46:48.630248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.363 [2024-10-17 17:46:48.630266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.363 [2024-10-17 17:46:48.630275] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.363 [2024-10-17 17:46:48.630284] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.363 [2024-10-17 17:46:48.640538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.363 qpair failed and we were unable to recover it. 00:22:10.363 [2024-10-17 17:46:48.650403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.363 [2024-10-17 17:46:48.650456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.363 [2024-10-17 17:46:48.650474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.363 [2024-10-17 17:46:48.650484] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.363 [2024-10-17 17:46:48.650493] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.363 [2024-10-17 17:46:48.660475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.363 qpair failed and we were unable to recover it. 
00:22:10.363 [2024-10-17 17:46:48.670468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.363 [2024-10-17 17:46:48.670508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.363 [2024-10-17 17:46:48.670526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.363 [2024-10-17 17:46:48.670536] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.363 [2024-10-17 17:46:48.670545] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.363 [2024-10-17 17:46:48.680678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.363 qpair failed and we were unable to recover it. 00:22:10.363 [2024-10-17 17:46:48.690391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.363 [2024-10-17 17:46:48.690441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.363 [2024-10-17 17:46:48.690460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.363 [2024-10-17 17:46:48.690469] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.363 [2024-10-17 17:46:48.690478] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.363 [2024-10-17 17:46:48.700653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.363 qpair failed and we were unable to recover it. 00:22:10.363 [2024-10-17 17:46:48.710481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.363 [2024-10-17 17:46:48.710526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.363 [2024-10-17 17:46:48.710544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.363 [2024-10-17 17:46:48.710554] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.363 [2024-10-17 17:46:48.710563] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.363 [2024-10-17 17:46:48.720623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.363 qpair failed and we were unable to recover it. 
00:22:10.363 [2024-10-17 17:46:48.730507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.363 [2024-10-17 17:46:48.730547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.363 [2024-10-17 17:46:48.730567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.363 [2024-10-17 17:46:48.730577] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.363 [2024-10-17 17:46:48.730586] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.363 [2024-10-17 17:46:48.740714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.363 qpair failed and we were unable to recover it. 00:22:10.363 [2024-10-17 17:46:48.750727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.363 [2024-10-17 17:46:48.750771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.363 [2024-10-17 17:46:48.750789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.363 [2024-10-17 17:46:48.750799] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.363 [2024-10-17 17:46:48.750808] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.621 [2024-10-17 17:46:48.760930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.621 qpair failed and we were unable to recover it. 00:22:10.621 [2024-10-17 17:46:48.770714] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.621 [2024-10-17 17:46:48.770758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.621 [2024-10-17 17:46:48.770781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.621 [2024-10-17 17:46:48.770791] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.621 [2024-10-17 17:46:48.770799] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.621 [2024-10-17 17:46:48.780829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.621 qpair failed and we were unable to recover it. 
00:22:10.621 [2024-10-17 17:46:48.790722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.621 [2024-10-17 17:46:48.790762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.621 [2024-10-17 17:46:48.790780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.621 [2024-10-17 17:46:48.790790] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.621 [2024-10-17 17:46:48.790799] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.621 [2024-10-17 17:46:48.800996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.621 qpair failed and we were unable to recover it. 00:22:10.621 [2024-10-17 17:46:48.810883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.621 [2024-10-17 17:46:48.810927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.621 [2024-10-17 17:46:48.810946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.621 [2024-10-17 17:46:48.810956] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.621 [2024-10-17 17:46:48.810965] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.621 [2024-10-17 17:46:48.821062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.621 qpair failed and we were unable to recover it. 00:22:10.621 [2024-10-17 17:46:48.830885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.622 [2024-10-17 17:46:48.830924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.622 [2024-10-17 17:46:48.830943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.622 [2024-10-17 17:46:48.830953] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.622 [2024-10-17 17:46:48.830961] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.622 [2024-10-17 17:46:48.841154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.622 qpair failed and we were unable to recover it. 
00:22:10.622 [2024-10-17 17:46:48.850966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.622 [2024-10-17 17:46:48.851007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.622 [2024-10-17 17:46:48.851026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.622 [2024-10-17 17:46:48.851036] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.622 [2024-10-17 17:46:48.851045] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.622 [2024-10-17 17:46:48.861067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.622 qpair failed and we were unable to recover it. 00:22:10.622 [2024-10-17 17:46:48.870993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.622 [2024-10-17 17:46:48.871039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.622 [2024-10-17 17:46:48.871057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.622 [2024-10-17 17:46:48.871066] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.622 [2024-10-17 17:46:48.871075] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.622 [2024-10-17 17:46:48.881290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.622 qpair failed and we were unable to recover it. 00:22:10.622 [2024-10-17 17:46:48.891221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.622 [2024-10-17 17:46:48.891263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.622 [2024-10-17 17:46:48.891281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.622 [2024-10-17 17:46:48.891290] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.622 [2024-10-17 17:46:48.891299] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.622 [2024-10-17 17:46:48.901096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.622 qpair failed and we were unable to recover it. 
00:22:10.622 [2024-10-17 17:46:48.911065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.622 [2024-10-17 17:46:48.911101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.622 [2024-10-17 17:46:48.911119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.622 [2024-10-17 17:46:48.911128] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.622 [2024-10-17 17:46:48.911137] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.622 [2024-10-17 17:46:48.921321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.622 qpair failed and we were unable to recover it. 00:22:10.622 [2024-10-17 17:46:48.931197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.622 [2024-10-17 17:46:48.931241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.622 [2024-10-17 17:46:48.931259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.622 [2024-10-17 17:46:48.931269] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.622 [2024-10-17 17:46:48.931278] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.622 [2024-10-17 17:46:48.941432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.622 qpair failed and we were unable to recover it. 00:22:10.622 [2024-10-17 17:46:48.951214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.622 [2024-10-17 17:46:48.951262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.622 [2024-10-17 17:46:48.951280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.622 [2024-10-17 17:46:48.951290] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.622 [2024-10-17 17:46:48.951299] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.622 [2024-10-17 17:46:48.961396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.622 qpair failed and we were unable to recover it. 
00:22:10.622 [2024-10-17 17:46:48.971317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.622 [2024-10-17 17:46:48.971361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.622 [2024-10-17 17:46:48.971379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.622 [2024-10-17 17:46:48.971389] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.622 [2024-10-17 17:46:48.971398] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.622 [2024-10-17 17:46:48.981382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.622 qpair failed and we were unable to recover it. 00:22:10.622 [2024-10-17 17:46:48.991384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.622 [2024-10-17 17:46:48.991427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.622 [2024-10-17 17:46:48.991445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.622 [2024-10-17 17:46:48.991455] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.622 [2024-10-17 17:46:48.991464] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.622 [2024-10-17 17:46:49.001639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.622 qpair failed and we were unable to recover it. 00:22:10.622 [2024-10-17 17:46:49.011465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.622 [2024-10-17 17:46:49.011509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.622 [2024-10-17 17:46:49.011527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.622 [2024-10-17 17:46:49.011537] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.622 [2024-10-17 17:46:49.011546] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.881 [2024-10-17 17:46:49.021644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.881 qpair failed and we were unable to recover it. 
00:22:10.881 [2024-10-17 17:46:49.031491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.881 [2024-10-17 17:46:49.031539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.881 [2024-10-17 17:46:49.031557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.881 [2024-10-17 17:46:49.031567] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.881 [2024-10-17 17:46:49.031579] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.881 [2024-10-17 17:46:49.041570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.881 qpair failed and we were unable to recover it. 00:22:10.881 [2024-10-17 17:46:49.051685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.881 [2024-10-17 17:46:49.051734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.881 [2024-10-17 17:46:49.051754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.881 [2024-10-17 17:46:49.051764] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.881 [2024-10-17 17:46:49.051772] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.881 [2024-10-17 17:46:49.061671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.881 qpair failed and we were unable to recover it. 00:22:10.881 [2024-10-17 17:46:49.071540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.881 [2024-10-17 17:46:49.071581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.881 [2024-10-17 17:46:49.071598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.881 [2024-10-17 17:46:49.071608] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.881 [2024-10-17 17:46:49.071617] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.881 [2024-10-17 17:46:49.081822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.881 qpair failed and we were unable to recover it. 
00:22:10.881 [2024-10-17 17:46:49.091669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.881 [2024-10-17 17:46:49.091711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.881 [2024-10-17 17:46:49.091729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.881 [2024-10-17 17:46:49.091738] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.881 [2024-10-17 17:46:49.091747] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.881 [2024-10-17 17:46:49.101751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.881 qpair failed and we were unable to recover it. 00:22:10.881 [2024-10-17 17:46:49.111717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.881 [2024-10-17 17:46:49.111757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.881 [2024-10-17 17:46:49.111775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.881 [2024-10-17 17:46:49.111784] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.881 [2024-10-17 17:46:49.111793] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.881 [2024-10-17 17:46:49.121826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.881 qpair failed and we were unable to recover it. 00:22:10.881 [2024-10-17 17:46:49.131818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.881 [2024-10-17 17:46:49.131857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.881 [2024-10-17 17:46:49.131876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.881 [2024-10-17 17:46:49.131885] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.881 [2024-10-17 17:46:49.131894] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.881 [2024-10-17 17:46:49.142048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.881 qpair failed and we were unable to recover it. 
00:22:10.881 [2024-10-17 17:46:49.151896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.881 [2024-10-17 17:46:49.151937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.881 [2024-10-17 17:46:49.151955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.881 [2024-10-17 17:46:49.151964] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.881 [2024-10-17 17:46:49.151973] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.881 [2024-10-17 17:46:49.161833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.881 qpair failed and we were unable to recover it. 00:22:10.881 [2024-10-17 17:46:49.171911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.881 [2024-10-17 17:46:49.171954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.881 [2024-10-17 17:46:49.171972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.881 [2024-10-17 17:46:49.171981] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.881 [2024-10-17 17:46:49.171991] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.881 [2024-10-17 17:46:49.181893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.881 qpair failed and we were unable to recover it. 00:22:10.881 [2024-10-17 17:46:49.191887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.881 [2024-10-17 17:46:49.191927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.881 [2024-10-17 17:46:49.191945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.881 [2024-10-17 17:46:49.191954] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.881 [2024-10-17 17:46:49.191963] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.881 [2024-10-17 17:46:49.202224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.881 qpair failed and we were unable to recover it. 
00:22:10.881 [2024-10-17 17:46:49.211993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.881 [2024-10-17 17:46:49.212032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.881 [2024-10-17 17:46:49.212053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.881 [2024-10-17 17:46:49.212063] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.881 [2024-10-17 17:46:49.212072] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.881 [2024-10-17 17:46:49.222153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.881 qpair failed and we were unable to recover it. 00:22:10.881 [2024-10-17 17:46:49.232119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.881 [2024-10-17 17:46:49.232160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.881 [2024-10-17 17:46:49.232179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.881 [2024-10-17 17:46:49.232188] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.881 [2024-10-17 17:46:49.232197] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.881 [2024-10-17 17:46:49.242123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.881 qpair failed and we were unable to recover it. 00:22:10.881 [2024-10-17 17:46:49.252171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:10.881 [2024-10-17 17:46:49.252213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:10.881 [2024-10-17 17:46:49.252231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:10.881 [2024-10-17 17:46:49.252241] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:10.881 [2024-10-17 17:46:49.252250] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:10.881 [2024-10-17 17:46:49.262305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:10.881 qpair failed and we were unable to recover it. 
00:22:11.140 [2024-10-17 17:46:49.272170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:11.140 [2024-10-17 17:46:49.272218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:11.140 [2024-10-17 17:46:49.272236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:11.140 [2024-10-17 17:46:49.272245] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:11.140 [2024-10-17 17:46:49.272254] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:11.140 [2024-10-17 17:46:49.282303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:11.140 qpair failed and we were unable to recover it. 00:22:11.140 [2024-10-17 17:46:49.292239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:11.140 [2024-10-17 17:46:49.292282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:11.140 [2024-10-17 17:46:49.292300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:11.140 [2024-10-17 17:46:49.292310] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:11.140 [2024-10-17 17:46:49.292319] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:11.140 [2024-10-17 17:46:49.302407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:11.140 qpair failed and we were unable to recover it. 00:22:11.140 [2024-10-17 17:46:49.312248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:11.140 [2024-10-17 17:46:49.312292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:11.140 [2024-10-17 17:46:49.312310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:11.140 [2024-10-17 17:46:49.312321] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:11.140 [2024-10-17 17:46:49.312329] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:11.140 [2024-10-17 17:46:49.322345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:11.140 qpair failed and we were unable to recover it. 
00:22:11.140 [2024-10-17 17:46:49.332308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:11.140 [2024-10-17 17:46:49.332350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:11.140 [2024-10-17 17:46:49.332369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:11.140 [2024-10-17 17:46:49.332378] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:11.140 [2024-10-17 17:46:49.332387] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:11.140 [2024-10-17 17:46:49.342514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:11.140 qpair failed and we were unable to recover it. 00:22:11.140 [2024-10-17 17:46:49.352315] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:11.140 [2024-10-17 17:46:49.352359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:11.140 [2024-10-17 17:46:49.352377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:11.140 [2024-10-17 17:46:49.352387] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:11.140 [2024-10-17 17:46:49.352396] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:11.140 [2024-10-17 17:46:49.362454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:11.140 qpair failed and we were unable to recover it. 00:22:11.140 [2024-10-17 17:46:49.372493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:11.140 [2024-10-17 17:46:49.372533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:11.140 [2024-10-17 17:46:49.372552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:11.140 [2024-10-17 17:46:49.372561] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:11.140 [2024-10-17 17:46:49.372570] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:11.140 [2024-10-17 17:46:49.382652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:11.140 qpair failed and we were unable to recover it. 
00:22:11.140 [2024-10-17 17:46:49.392414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:11.140 [2024-10-17 17:46:49.392460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:11.140 [2024-10-17 17:46:49.392482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:11.140 [2024-10-17 17:46:49.392492] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:11.140 [2024-10-17 17:46:49.392501] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:11.140 [2024-10-17 17:46:49.402663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:11.140 qpair failed and we were unable to recover it. 00:22:11.140 [2024-10-17 17:46:49.412599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:11.140 [2024-10-17 17:46:49.412642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:11.140 [2024-10-17 17:46:49.412659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:11.140 [2024-10-17 17:46:49.412669] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:11.140 [2024-10-17 17:46:49.412678] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:11.140 [2024-10-17 17:46:49.422688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:11.140 qpair failed and we were unable to recover it. 00:22:11.140 [2024-10-17 17:46:49.432607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:11.140 [2024-10-17 17:46:49.432654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:11.140 [2024-10-17 17:46:49.432673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:11.140 [2024-10-17 17:46:49.432682] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:11.140 [2024-10-17 17:46:49.432691] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x200000330940 00:22:11.140 [2024-10-17 17:46:49.442725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:11.140 qpair failed and we were unable to recover it. 00:22:11.140 [2024-10-17 17:46:49.442834] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:22:11.140 A controller has encountered a failure and is being reset. 
00:22:11.140 [2024-10-17 17:46:49.452644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:11.140 [2024-10-17 17:46:49.452680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:11.140 [2024-10-17 17:46:49.452702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:11.141 [2024-10-17 17:46:49.452713] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:11.141 [2024-10-17 17:46:49.452722] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf7c0 00:22:11.141 [2024-10-17 17:46:49.462787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:11.141 qpair failed and we were unable to recover it. 00:22:11.141 [2024-10-17 17:46:49.472668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:11.141 [2024-10-17 17:46:49.472712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:11.141 [2024-10-17 17:46:49.472733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:11.141 [2024-10-17 17:46:49.472743] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:11.141 [2024-10-17 17:46:49.472752] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf7c0 00:22:11.141 [2024-10-17 17:46:49.482787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:11.141 qpair failed and we were unable to recover it. 
00:22:12.513 Read completed with error (sct=0, sc=8) 00:22:12.513 starting I/O failed 00:22:12.513 Write completed with error (sct=0, sc=8) 00:22:12.513 starting I/O failed 00:22:12.513 Write completed with error (sct=0, sc=8) 00:22:12.513 starting I/O failed 00:22:12.513 Write completed with error (sct=0, sc=8) 00:22:12.513 starting I/O failed 00:22:12.513 Read completed with error (sct=0, sc=8) 00:22:12.513 starting I/O failed 00:22:12.513 Read completed with error (sct=0, sc=8) 00:22:12.513 starting I/O failed 00:22:12.513 Write completed with error (sct=0, sc=8) 00:22:12.513 starting I/O failed 00:22:12.513 Read completed with error (sct=0, sc=8) 00:22:12.513 starting I/O failed 00:22:12.513 Read completed with error (sct=0, sc=8) 00:22:12.513 starting I/O failed 00:22:12.513 Write completed with error (sct=0, sc=8) 00:22:12.513 starting I/O failed 00:22:12.513 Write completed with error (sct=0, sc=8) 00:22:12.513 starting I/O failed 00:22:12.513 Read completed with error (sct=0, sc=8) 00:22:12.513 starting I/O failed 00:22:12.513 Read completed with error (sct=0, sc=8) 00:22:12.513 starting I/O failed 00:22:12.513 Read completed with error (sct=0, sc=8) 00:22:12.513 starting I/O failed 00:22:12.513 Write completed with error (sct=0, sc=8) 00:22:12.513 starting I/O failed 00:22:12.513 Read completed with error (sct=0, sc=8) 00:22:12.513 starting I/O failed 00:22:12.514 Write completed with error (sct=0, sc=8) 00:22:12.514 starting I/O failed 00:22:12.514 Read completed with error (sct=0, sc=8) 00:22:12.514 starting I/O failed 00:22:12.514 Write completed with error (sct=0, sc=8) 00:22:12.514 starting I/O failed 00:22:12.514 Read completed with error (sct=0, sc=8) 00:22:12.514 starting I/O failed 00:22:12.514 Read completed with error (sct=0, sc=8) 00:22:12.514 starting I/O failed 00:22:12.514 Read completed with error (sct=0, sc=8) 00:22:12.514 starting I/O failed 00:22:12.514 Write completed with error (sct=0, sc=8) 00:22:12.514 starting I/O failed 00:22:12.514 Read completed with error (sct=0, sc=8) 00:22:12.514 starting I/O failed 00:22:12.514 Write completed with error (sct=0, sc=8) 00:22:12.514 starting I/O failed 00:22:12.514 Read completed with error (sct=0, sc=8) 00:22:12.514 starting I/O failed 00:22:12.514 Write completed with error (sct=0, sc=8) 00:22:12.514 starting I/O failed 00:22:12.514 Read completed with error (sct=0, sc=8) 00:22:12.514 starting I/O failed 00:22:12.514 Write completed with error (sct=0, sc=8) 00:22:12.514 starting I/O failed 00:22:12.514 Read completed with error (sct=0, sc=8) 00:22:12.514 starting I/O failed 00:22:12.514 Write completed with error (sct=0, sc=8) 00:22:12.514 starting I/O failed 00:22:12.514 Read completed with error (sct=0, sc=8) 00:22:12.514 starting I/O failed 00:22:12.514 [2024-10-17 17:46:50.487236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:12.514 [2024-10-17 17:46:50.495639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:12.514 [2024-10-17 17:46:50.495687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:12.514 [2024-10-17 17:46:50.495708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:12.514 [2024-10-17 17:46:50.495719] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll 
NVMe-oF Fabric CONNECT command 00:22:12.514 [2024-10-17 17:46:50.495729] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002cff40 00:22:12.514 [2024-10-17 17:46:50.505834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:12.514 qpair failed and we were unable to recover it. 00:22:12.514 [2024-10-17 17:46:50.515698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:12.514 [2024-10-17 17:46:50.515740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:12.514 [2024-10-17 17:46:50.515758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:12.514 [2024-10-17 17:46:50.515768] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:12.514 [2024-10-17 17:46:50.515782] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002cff40 00:22:12.514 [2024-10-17 17:46:50.525744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:12.514 qpair failed and we were unable to recover it. 00:22:12.514 [2024-10-17 17:46:50.536014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:12.514 [2024-10-17 17:46:50.536083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:12.514 [2024-10-17 17:46:50.536112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:12.514 [2024-10-17 17:46:50.536127] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:12.514 [2024-10-17 17:46:50.536140] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c00 00:22:12.514 [2024-10-17 17:46:50.546124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:12.514 qpair failed and we were unable to recover it. 00:22:12.514 [2024-10-17 17:46:50.555879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:22:12.514 [2024-10-17 17:46:50.555916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:22:12.514 [2024-10-17 17:46:50.555935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:22:12.514 [2024-10-17 17:46:50.555946] nvme_rdma.c:1343:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:22:12.514 [2024-10-17 17:46:50.555955] nvme_rdma.c:2696:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c00 00:22:12.514 [2024-10-17 17:46:50.566033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:12.514 qpair failed and we were unable to recover it. 
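The burst of Read/Write completed with error (sct=0, sc=8) ... starting I/O failed pairs above is the in-flight I/O draining as the queue pair is torn down: sct 0 is the generic status type and sc 8 (0x08) is, per the NVMe base spec's generic status table, Command Aborted due to SQ Deletion, so these are not media errors, just commands whose submission queue vanished underneath them. The rqpair pointer also keeps moving (0x200000330940, 0x2000003cf7c0, 0x2000002cff40, 0x2000003d4c00) as the host cycles through its queue pairs. A tally sketch against the same hypothetical console.log:

  # Split the aborted completions into reads and writes.
  grep -o 'Read completed with error (sct=0, sc=8)' console.log | wc -l
  grep -o 'Write completed with error (sct=0, sc=8)' console.log | wc -l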
00:22:12.514 [2024-10-17 17:46:50.566137] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:22:12.514 [2024-10-17 17:46:50.597304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:12.514 Controller properly reset. 00:22:12.514 Initializing NVMe Controllers 00:22:12.514 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:12.514 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:12.514 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:22:12.514 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:22:12.514 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:22:12.514 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:22:12.514 Initialization complete. Launching workers. 00:22:12.514 Starting thread on core 1 00:22:12.514 Starting thread on core 2 00:22:12.514 Starting thread on core 3 00:22:12.514 Starting thread on core 0 00:22:12.514 17:46:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:22:12.514 00:22:12.514 real 0m12.667s 00:22:12.514 user 0m27.788s 00:22:12.514 sys 0m3.228s 00:22:12.514 17:46:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:12.514 17:46:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:12.514 ************************************ 00:22:12.514 END TEST nvmf_target_disconnect_tc2 00:22:12.514 ************************************ 00:22:12.514 17:46:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:22:12.514 17:46:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:22:12.514 17:46:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:12.514 17:46:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:12.514 17:46:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:22:12.514 ************************************ 00:22:12.514 START TEST nvmf_target_disconnect_tc3 00:22:12.514 ************************************ 00:22:12.514 17:46:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc3 00:22:12.514 17:46:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=723866 00:22:12.515 17:46:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:22:12.515 17:46:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:22:14.411 17:46:52 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 722915 00:22:14.411 17:46:52 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:22:15.786 Write completed with error (sct=0, sc=8) 00:22:15.786 starting I/O failed 00:22:15.786 Read completed with error (sct=0, sc=8) 00:22:15.786 starting I/O failed 00:22:15.786 Read completed with error (sct=0, sc=8) 00:22:15.786 starting I/O failed 00:22:15.786 Read completed with error (sct=0, sc=8) 00:22:15.786 starting I/O failed 00:22:15.786 Write completed with error (sct=0, sc=8) 00:22:15.786 starting I/O failed 00:22:15.786 Write completed with error (sct=0, sc=8) 00:22:15.786 starting I/O failed 00:22:15.786 Write completed with error (sct=0, sc=8) 00:22:15.786 starting I/O failed 00:22:15.786 Read completed with error (sct=0, sc=8) 00:22:15.786 starting I/O failed 00:22:15.786 Write completed with error (sct=0, sc=8) 00:22:15.786 starting I/O failed 00:22:15.786 Read completed with error (sct=0, sc=8) 00:22:15.786 starting I/O failed 00:22:15.786 Read completed with error (sct=0, sc=8) 00:22:15.786 starting I/O failed 00:22:15.786 Read completed with error (sct=0, sc=8) 00:22:15.786 starting I/O failed 00:22:15.786 Write completed with error (sct=0, sc=8) 00:22:15.786 starting I/O failed 00:22:15.786 Write completed with error (sct=0, sc=8) 00:22:15.786 starting I/O failed 00:22:15.786 Write completed with error (sct=0, sc=8) 00:22:15.786 starting I/O failed 00:22:15.786 Write completed with error (sct=0, sc=8) 00:22:15.786 starting I/O failed 00:22:15.786 Write completed with error (sct=0, sc=8) 00:22:15.786 starting I/O failed 00:22:15.786 Read completed with error (sct=0, sc=8) 00:22:15.786 starting I/O failed 00:22:15.786 Read completed with error (sct=0, sc=8) 00:22:15.786 starting I/O failed 00:22:15.786 Read completed with error (sct=0, sc=8) 00:22:15.786 starting I/O failed 00:22:15.786 Write completed with error (sct=0, sc=8) 00:22:15.786 starting I/O failed 00:22:15.786 Write completed with error (sct=0, sc=8) 00:22:15.786 starting I/O failed 00:22:15.786 Read completed with error (sct=0, sc=8) 00:22:15.786 starting I/O failed 00:22:15.786 Write completed with error (sct=0, sc=8) 00:22:15.786 starting I/O failed 00:22:15.786 Write completed with error (sct=0, sc=8) 00:22:15.786 starting I/O failed 00:22:15.786 Read completed with error (sct=0, sc=8) 00:22:15.786 starting I/O failed 00:22:15.786 Write completed with error (sct=0, sc=8) 00:22:15.786 starting I/O failed 00:22:15.786 Write completed with error (sct=0, sc=8) 00:22:15.786 starting I/O failed 00:22:15.786 Read completed with error (sct=0, sc=8) 00:22:15.786 starting I/O failed 00:22:15.786 Read completed with error (sct=0, sc=8) 00:22:15.786 starting I/O failed 00:22:15.786 Write completed with error (sct=0, sc=8) 00:22:15.786 starting I/O failed 00:22:15.786 Write completed with error (sct=0, sc=8) 00:22:15.786 starting I/O failed 00:22:15.786 [2024-10-17 17:46:53.932035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:16.721 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 722915 Killed "${NVMF_APP[@]}" "$@" 00:22:16.721 17:46:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:22:16.721 17:46:54 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:22:16.721 17:46:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:16.721 17:46:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:16.721 17:46:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:16.721 17:46:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@507 -- # nvmfpid=724411 00:22:16.721 17:46:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@508 -- # waitforlisten 724411 00:22:16.721 17:46:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@506 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:22:16.721 17:46:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@831 -- # '[' -z 724411 ']' 00:22:16.721 17:46:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.721 17:46:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:16.721 17:46:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.721 17:46:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:16.721 17:46:54 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:16.721 [2024-10-17 17:46:54.813928] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 
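The tc3 flow above pairs a long-running host-side reconnect example against a target that is killed and relaunched underneath it. Restated standalone, with every flag and address taken verbatim from the trace, the host side is:

  # Queue depth 32, 4 KiB I/O, 50/50 random read/write for 10 s on cores 0-3 (-c 0xF).
  # traddr is the primary RDMA listener; alt_traddr registers 192.168.100.9 as the
  # failover address the host falls back to once the first target is kill -9'd.
  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'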
00:22:16.721 [2024-10-17 17:46:54.813990] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:16.721 [2024-10-17 17:46:54.903048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:16.721 Read completed with error (sct=0, sc=8) 00:22:16.721 starting I/O failed 00:22:16.721 Read completed with error (sct=0, sc=8) 00:22:16.721 starting I/O failed 00:22:16.721 Read completed with error (sct=0, sc=8) 00:22:16.721 starting I/O failed 00:22:16.721 Write completed with error (sct=0, sc=8) 00:22:16.721 starting I/O failed 00:22:16.721 Read completed with error (sct=0, sc=8) 00:22:16.721 starting I/O failed 00:22:16.721 Read completed with error (sct=0, sc=8) 00:22:16.721 starting I/O failed 00:22:16.721 Write completed with error (sct=0, sc=8) 00:22:16.721 starting I/O failed 00:22:16.721 Read completed with error (sct=0, sc=8) 00:22:16.721 starting I/O failed 00:22:16.721 Read completed with error (sct=0, sc=8) 00:22:16.721 starting I/O failed 00:22:16.721 Read completed with error (sct=0, sc=8) 00:22:16.721 starting I/O failed 00:22:16.721 Write completed with error (sct=0, sc=8) 00:22:16.721 starting I/O failed 00:22:16.721 Read completed with error (sct=0, sc=8) 00:22:16.721 starting I/O failed 00:22:16.721 Read completed with error (sct=0, sc=8) 00:22:16.721 starting I/O failed 00:22:16.721 Read completed with error (sct=0, sc=8) 00:22:16.721 starting I/O failed 00:22:16.721 Read completed with error (sct=0, sc=8) 00:22:16.721 starting I/O failed 00:22:16.721 Read completed with error (sct=0, sc=8) 00:22:16.721 starting I/O failed 00:22:16.721 Write completed with error (sct=0, sc=8) 00:22:16.721 starting I/O failed 00:22:16.721 Read completed with error (sct=0, sc=8) 00:22:16.721 starting I/O failed 00:22:16.721 Write completed with error (sct=0, sc=8) 00:22:16.721 starting I/O failed 00:22:16.721 Write completed with error (sct=0, sc=8) 00:22:16.721 starting I/O failed 00:22:16.721 Read completed with error (sct=0, sc=8) 00:22:16.721 starting I/O failed 00:22:16.721 Read completed with error (sct=0, sc=8) 00:22:16.721 starting I/O failed 00:22:16.721 Write completed with error (sct=0, sc=8) 00:22:16.721 starting I/O failed 00:22:16.721 Write completed with error (sct=0, sc=8) 00:22:16.721 starting I/O failed 00:22:16.721 Read completed with error (sct=0, sc=8) 00:22:16.721 starting I/O failed 00:22:16.721 Write completed with error (sct=0, sc=8) 00:22:16.721 starting I/O failed 00:22:16.721 Read completed with error (sct=0, sc=8) 00:22:16.721 starting I/O failed 00:22:16.721 Write completed with error (sct=0, sc=8) 00:22:16.721 starting I/O failed 00:22:16.721 Read completed with error (sct=0, sc=8) 00:22:16.721 starting I/O failed 00:22:16.721 Write completed with error (sct=0, sc=8) 00:22:16.721 starting I/O failed 00:22:16.721 Write completed with error (sct=0, sc=8) 00:22:16.721 starting I/O failed 00:22:16.721 Read completed with error (sct=0, sc=8) 00:22:16.721 starting I/O failed 00:22:16.721 [2024-10-17 17:46:54.936317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:22:16.721 [2024-10-17 17:46:54.948880] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
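The relaunched target is pinned with -m 0xF0 so it never contends with the reconnect example's cores; decoding the mask shows why the reactors below come up on cores 4-7 (a quick sketch, any bash shell):

  # -m 0xF0 = bits 4..7 set = CPUs 4-7 for the target; -c 0xF = CPUs 0-3 for the host example.
  printf 'cores:'; for c in {0..7}; do (( (0xF0 >> c) & 1 )) && printf ' %d' "$c"; done; echo
  # prints: cores: 4 5 6 7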
00:22:16.721 [2024-10-17 17:46:54.948913] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:16.721 [2024-10-17 17:46:54.948923] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:16.721 [2024-10-17 17:46:54.948931] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:16.721 [2024-10-17 17:46:54.948938] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:16.721 [2024-10-17 17:46:54.950352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:22:16.721 [2024-10-17 17:46:54.950375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:22:16.721 [2024-10-17 17:46:54.950462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:16.721 [2024-10-17 17:46:54.950463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:22:17.286 17:46:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:17.286 17:46:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@864 -- # return 0 00:22:17.286 17:46:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:17.286 17:46:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:17.286 17:46:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:17.544 17:46:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.544 17:46:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:17.544 17:46:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.544 17:46:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:17.544 Malloc0 00:22:17.544 17:46:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.544 17:46:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:22:17.544 17:46:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.544 17:46:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:17.544 [2024-10-17 17:46:55.784157] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x597650/0x5a3070) succeed. 00:22:17.544 [2024-10-17 17:46:55.794940] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x598ce0/0x5e4710) succeed. 
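Because the target runs with -e 0xFFFF, every tracepoint group is live; per the app_setup_trace notices above, a snapshot can be pulled at any point while the disconnect test runs (the /tmp destination below is an arbitrary choice):

  # Decode live nvmf tracepoints from shared-memory instance 0 (the target's -i 0):
  spdk_trace -s nvmf -i 0
  # or keep the raw ring buffer for offline analysis:
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0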
00:22:17.544 17:46:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.544 17:46:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:17.544 17:46:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.544 17:46:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:17.544 17:46:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.544 17:46:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:17.544 17:46:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.544 17:46:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:17.801 Read completed with error (sct=0, sc=8) 00:22:17.801 starting I/O failed 00:22:17.801 Write completed with error (sct=0, sc=8) 00:22:17.801 starting I/O failed 00:22:17.801 Read completed with error (sct=0, sc=8) 00:22:17.801 starting I/O failed 00:22:17.801 Write completed with error (sct=0, sc=8) 00:22:17.801 starting I/O failed 00:22:17.801 Read completed with error (sct=0, sc=8) 00:22:17.801 starting I/O failed 00:22:17.801 Read completed with error (sct=0, sc=8) 00:22:17.801 starting I/O failed 00:22:17.801 Read completed with error (sct=0, sc=8) 00:22:17.801 starting I/O failed 00:22:17.801 Write completed with error (sct=0, sc=8) 00:22:17.801 starting I/O failed 00:22:17.801 Read completed with error (sct=0, sc=8) 00:22:17.801 starting I/O failed 00:22:17.801 Write completed with error (sct=0, sc=8) 00:22:17.801 starting I/O failed 00:22:17.801 Write completed with error (sct=0, sc=8) 00:22:17.801 starting I/O failed 00:22:17.801 Read completed with error (sct=0, sc=8) 00:22:17.801 starting I/O failed 00:22:17.801 Read completed with error (sct=0, sc=8) 00:22:17.801 starting I/O failed 00:22:17.801 Write completed with error (sct=0, sc=8) 00:22:17.801 starting I/O failed 00:22:17.801 Read completed with error (sct=0, sc=8) 00:22:17.801 starting I/O failed 00:22:17.801 Write completed with error (sct=0, sc=8) 00:22:17.801 starting I/O failed 00:22:17.801 Read completed with error (sct=0, sc=8) 00:22:17.801 starting I/O failed 00:22:17.801 Read completed with error (sct=0, sc=8) 00:22:17.801 starting I/O failed 00:22:17.801 Write completed with error (sct=0, sc=8) 00:22:17.801 starting I/O failed 00:22:17.801 Write completed with error (sct=0, sc=8) 00:22:17.801 starting I/O failed 00:22:17.801 Read completed with error (sct=0, sc=8) 00:22:17.801 starting I/O failed 00:22:17.801 Write completed with error (sct=0, sc=8) 00:22:17.801 starting I/O failed 00:22:17.801 Read completed with error (sct=0, sc=8) 00:22:17.801 starting I/O failed 00:22:17.801 Read completed with error (sct=0, sc=8) 00:22:17.801 starting I/O failed 00:22:17.801 Write completed with error (sct=0, sc=8) 00:22:17.801 starting I/O failed 00:22:17.801 Read completed with error (sct=0, sc=8) 00:22:17.801 starting I/O failed 00:22:17.801 Write completed with error (sct=0, sc=8) 
00:22:17.801 starting I/O failed 00:22:17.801 Read completed with error (sct=0, sc=8) 00:22:17.801 starting I/O failed 00:22:17.801 Write completed with error (sct=0, sc=8) 00:22:17.801 starting I/O failed 00:22:17.801 Read completed with error (sct=0, sc=8) 00:22:17.801 starting I/O failed 00:22:17.801 Write completed with error (sct=0, sc=8) 00:22:17.801 starting I/O failed 00:22:17.801 Write completed with error (sct=0, sc=8) 00:22:17.801 starting I/O failed 00:22:17.801 [2024-10-17 17:46:55.940688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.801 17:46:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.801 17:46:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:22:17.801 17:46:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.801 17:46:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:17.802 [2024-10-17 17:46:55.949410] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:22:17.802 17:46:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.802 17:46:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:22:17.802 17:46:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.802 17:46:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:17.802 17:46:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.802 17:46:55 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 723866 00:22:18.733 Write completed with error (sct=0, sc=8) 00:22:18.733 starting I/O failed 00:22:18.733 Read completed with error (sct=0, sc=8) 00:22:18.733 starting I/O failed 00:22:18.733 Write completed with error (sct=0, sc=8) 00:22:18.733 starting I/O failed 00:22:18.733 Write completed with error (sct=0, sc=8) 00:22:18.733 starting I/O failed 00:22:18.733 Read completed with error (sct=0, sc=8) 00:22:18.733 starting I/O failed 00:22:18.733 Write completed with error (sct=0, sc=8) 00:22:18.733 starting I/O failed 00:22:18.733 Write completed with error (sct=0, sc=8) 00:22:18.733 starting I/O failed 00:22:18.733 Write completed with error (sct=0, sc=8) 00:22:18.733 starting I/O failed 00:22:18.733 Read completed with error (sct=0, sc=8) 00:22:18.733 starting I/O failed 00:22:18.733 Read completed with error (sct=0, sc=8) 00:22:18.733 starting I/O failed 00:22:18.733 Read completed with error (sct=0, sc=8) 00:22:18.733 starting I/O failed 00:22:18.733 Write completed with error (sct=0, sc=8) 00:22:18.733 starting I/O failed 00:22:18.733 Read completed with error (sct=0, sc=8) 00:22:18.733 starting I/O failed 00:22:18.733 Write completed with error (sct=0, sc=8) 00:22:18.733 starting I/O 
failed 00:22:18.733 Write completed with error (sct=0, sc=8) 00:22:18.733 starting I/O failed 00:22:18.733 Write completed with error (sct=0, sc=8) 00:22:18.733 starting I/O failed 00:22:18.733 Write completed with error (sct=0, sc=8) 00:22:18.733 starting I/O failed 00:22:18.733 Read completed with error (sct=0, sc=8) 00:22:18.733 starting I/O failed 00:22:18.733 Write completed with error (sct=0, sc=8) 00:22:18.733 starting I/O failed 00:22:18.733 Write completed with error (sct=0, sc=8) 00:22:18.733 starting I/O failed 00:22:18.733 Write completed with error (sct=0, sc=8) 00:22:18.733 starting I/O failed 00:22:18.733 Read completed with error (sct=0, sc=8) 00:22:18.733 starting I/O failed 00:22:18.733 Write completed with error (sct=0, sc=8) 00:22:18.733 starting I/O failed 00:22:18.733 Read completed with error (sct=0, sc=8) 00:22:18.733 starting I/O failed 00:22:18.733 Write completed with error (sct=0, sc=8) 00:22:18.733 starting I/O failed 00:22:18.733 Write completed with error (sct=0, sc=8) 00:22:18.733 starting I/O failed 00:22:18.733 Read completed with error (sct=0, sc=8) 00:22:18.733 starting I/O failed 00:22:18.733 Write completed with error (sct=0, sc=8) 00:22:18.733 starting I/O failed 00:22:18.733 Read completed with error (sct=0, sc=8) 00:22:18.733 starting I/O failed 00:22:18.733 Read completed with error (sct=0, sc=8) 00:22:18.733 starting I/O failed 00:22:18.733 Write completed with error (sct=0, sc=8) 00:22:18.733 starting I/O failed 00:22:18.733 Read completed with error (sct=0, sc=8) 00:22:18.733 starting I/O failed 00:22:18.733 [2024-10-17 17:46:56.945345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:18.733 [2024-10-17 17:46:56.945386] nvme_ctrlr.c:4505:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:22:18.733 A controller has encountered a failure and is being reset. 00:22:18.733 Resorting to new failover address 192.168.100.9 00:22:18.733 [2024-10-17 17:46:56.945529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:18.734 [2024-10-17 17:46:56.945611] nvme_rdma.c: 542:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:22:18.734 [2024-10-17 17:46:56.974954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:22:18.734 Controller properly reset. 00:22:22.923 Initializing NVMe Controllers 00:22:22.923 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:22.923 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:22:22.923 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:22:22.923 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:22:22.923 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:22:22.923 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:22:22.923 Initialization complete. Launching workers. 
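rpc_cmd in the traces above is the suite's wrapper around scripts/rpc.py on /var/tmp/spdk.sock; the entire tc3 target bring-up, from Malloc0 to the failover listeners the host just reconnected through, maps onto six plain RPCs (names, sizes, and addresses copied from the trace):

  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512 B blocks
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # Data and discovery listeners go on the alternate address only; 192.168.100.8
  # stays dead after the kill -9, which is what forces the failover path.
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420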
00:22:22.923 Starting thread on core 1 00:22:22.923 Starting thread on core 2 00:22:22.923 Starting thread on core 3 00:22:22.923 Starting thread on core 0 00:22:22.923 17:47:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:22:22.923 00:22:22.923 real 0m10.283s 00:22:22.923 user 1m3.530s 00:22:22.923 sys 0m1.849s 00:22:22.923 17:47:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:22.923 17:47:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:22.923 ************************************ 00:22:22.923 END TEST nvmf_target_disconnect_tc3 00:22:22.923 ************************************ 00:22:22.923 17:47:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:22:22.923 17:47:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:22:22.923 17:47:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:22.923 17:47:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:22:22.923 17:47:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == tcp ']' 00:22:22.923 17:47:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' rdma == rdma ']' 00:22:22.923 17:47:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:22:22.923 17:47:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:22.923 17:47:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma 00:22:22.923 rmmod nvme_rdma 00:22:22.923 rmmod nvme_fabrics 00:22:22.923 17:47:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:22.923 17:47:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:22:22.923 17:47:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:22:22.923 17:47:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@515 -- # '[' -n 724411 ']' 00:22:22.923 17:47:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # killprocess 724411 00:22:22.923 17:47:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 724411 ']' 00:22:22.923 17:47:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 724411 00:22:22.923 17:47:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:22:22.923 17:47:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:22.923 17:47:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 724411 00:22:22.923 17:47:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:22:22.923 17:47:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:22:22.923 17:47:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 724411' 00:22:22.923 killing process with pid 724411 00:22:22.923 17:47:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@969 -- # kill 724411 00:22:22.923 17:47:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 724411 00:22:23.182 17:47:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:23.182 17:47:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]] 00:22:23.182 00:22:23.182 real 0m31.374s 00:22:23.182 user 1m59.759s 00:22:23.182 sys 0m10.828s 00:22:23.182 17:47:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:23.182 17:47:01 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:22:23.182 ************************************ 00:22:23.182 END TEST nvmf_target_disconnect 00:22:23.182 ************************************ 00:22:23.182 17:47:01 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:22:23.182 00:22:23.182 real 5m6.391s 00:22:23.182 user 11m51.085s 00:22:23.182 sys 1m37.471s 00:22:23.182 17:47:01 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:23.182 17:47:01 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:23.182 ************************************ 00:22:23.182 END TEST nvmf_host 00:22:23.182 ************************************ 00:22:23.182 17:47:01 nvmf_rdma -- nvmf/nvmf.sh@19 -- # [[ rdma = \t\c\p ]] 00:22:23.182 00:22:23.182 real 17m44.403s 00:22:23.182 user 44m19.386s 00:22:23.182 sys 5m21.165s 00:22:23.182 17:47:01 nvmf_rdma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:23.182 17:47:01 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:23.182 ************************************ 00:22:23.182 END TEST nvmf_rdma 00:22:23.182 ************************************ 00:22:23.442 17:47:01 -- spdk/autotest.sh@278 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:22:23.442 17:47:01 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:23.442 17:47:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:23.442 17:47:01 -- common/autotest_common.sh@10 -- # set +x 00:22:23.442 ************************************ 00:22:23.442 START TEST spdkcli_nvmf_rdma 00:22:23.442 ************************************ 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:22:23.442 * Looking for test storage... 
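nvmfcleanup, traced above, gives the module unload up to 20 attempts because a qpair still mid-teardown can keep nvme-rdma busy; condensed, it behaves like the sketch below (only the first, successful iteration shows in this log, so the break-on-success and the retry delay are assumptions about the untraced loop body):

  set +e                          # unloading may legitimately fail while the module is busy
  for i in {1..20}; do
      modprobe -v -r nvme-rdma && break    # the 'rmmod nvme_rdma' / 'rmmod nvme_fabrics' above
      sleep 1
  done
  modprobe -v -r nvme-fabrics     # only once the rdma transport module is gone
  set -e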
00:22:23.442 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@1691 -- # lcov --version 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # IFS=.-: 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- scripts/common.sh@336 -- # read -ra ver1 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # IFS=.-: 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- scripts/common.sh@337 -- # read -ra ver2 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- scripts/common.sh@338 -- # local 'op=<' 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- scripts/common.sh@340 -- # ver1_l=2 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- scripts/common.sh@341 -- # ver2_l=1 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- scripts/common.sh@344 -- # case "$op" in 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- scripts/common.sh@345 -- # : 1 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # decimal 1 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=1 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 1 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # decimal 2 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- scripts/common.sh@353 -- # local d=2 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- scripts/common.sh@355 -- # echo 2 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- scripts/common.sh@368 -- # return 0 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:23.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.442 --rc genhtml_branch_coverage=1 00:22:23.442 --rc genhtml_function_coverage=1 00:22:23.442 --rc genhtml_legend=1 00:22:23.442 --rc geninfo_all_blocks=1 00:22:23.442 --rc geninfo_unexecuted_blocks=1 00:22:23.442 00:22:23.442 ' 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:23.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:22:23.442 --rc genhtml_branch_coverage=1 00:22:23.442 --rc genhtml_function_coverage=1 00:22:23.442 --rc genhtml_legend=1 00:22:23.442 --rc geninfo_all_blocks=1 00:22:23.442 --rc geninfo_unexecuted_blocks=1 00:22:23.442 00:22:23.442 ' 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:23.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.442 --rc genhtml_branch_coverage=1 00:22:23.442 --rc genhtml_function_coverage=1 00:22:23.442 --rc genhtml_legend=1 00:22:23.442 --rc geninfo_all_blocks=1 00:22:23.442 --rc geninfo_unexecuted_blocks=1 00:22:23.442 00:22:23.442 ' 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:23.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.442 --rc genhtml_branch_coverage=1 00:22:23.442 --rc genhtml_function_coverage=1 00:22:23.442 --rc genhtml_legend=1 00:22:23.442 --rc geninfo_all_blocks=1 00:22:23.442 --rc geninfo_unexecuted_blocks=1 00:22:23.442 00:22:23.442 ' 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:23.442 17:47:01 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:800e967b-538f-e911-906e-001635649f5c 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=800e967b-538f-e911-906e-001635649f5c 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # : 0 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:23.443 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter 
run_nvmf_tgt 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=725449 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 725449 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@831 -- # '[' -z 725449 ']' 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:23.443 17:47:01 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:23.702 [2024-10-17 17:47:01.872410] Starting SPDK v25.01-pre git sha1 264c0dc1a / DPDK 24.03.0 initialization... 00:22:23.702 [2024-10-17 17:47:01.872474] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid725449 ] 00:22:23.702 [2024-10-17 17:47:01.942698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:23.702 [2024-10-17 17:47:01.989889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.702 [2024-10-17 17:47:01.989893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.960 17:47:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:23.960 17:47:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@864 -- # return 0 00:22:23.960 17:47:02 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:22:23.960 17:47:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:23.960 17:47:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:23.960 17:47:02 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:22:23.960 17:47:02 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:22:23.960 17:47:02 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:22:23.960 17:47:02 spdkcli_nvmf_rdma -- nvmf/common.sh@467 -- # '[' -z rdma ']' 00:22:23.960 17:47:02 spdkcli_nvmf_rdma -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:23.960 17:47:02 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:23.960 17:47:02 spdkcli_nvmf_rdma -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:23.960 17:47:02 spdkcli_nvmf_rdma -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:23.960 17:47:02 spdkcli_nvmf_rdma -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.960 17:47:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:22:23.960 17:47:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.960 
17:47:02 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:23.960 17:47:02 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:23.960 17:47:02 spdkcli_nvmf_rdma -- nvmf/common.sh@309 -- # xtrace_disable 00:22:23.960 17:47:02 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # pci_devs=() 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # net_devs=() 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # e810=() 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # local -ga e810 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # x722=() 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # local -ga x722 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # mlx=() 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # local -ga mlx 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@347 -- # [[ rdma == rdma ]] 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@348 -- # pci_devs+=("${x722[@]}") 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@349 -- # pci_devs+=("${mlx[@]}") 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@353 -- # [[ mlx5 == mlx5 ]] 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@354 -- # pci_devs=("${mlx[@]}") 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:30.519 17:47:08 
spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.0 (0x15b3 - 0x1013)' 00:22:30.519 Found 0000:18:00.0 (0x15b3 - 0x1013) 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@367 -- # echo 'Found 0000:18:00.1 (0x15b3 - 0x1013)' 00:22:30.519 Found 0000:18:00.1 (0x15b3 - 0x1013) 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@368 -- # [[ mlx5_core == unknown ]] 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5_core == unbound ]] 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@376 -- # [[ 0x1013 == \0\x\1\0\1\7 ]] 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@377 -- # [[ 0x1013 == \0\x\1\0\1\9 ]] 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@378 -- # [[ rdma == rdma ]] 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # NVME_CONNECT='nvme connect -i 15' 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@398 -- # [[ mlx5 == e810 ]] 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.0: mlx_0_0' 00:22:30.519 Found net devices under 0000:18:00.0: mlx_0_0 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # [[ rdma == tcp ]] 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:18:00.1: mlx_0_1' 00:22:30.519 Found net devices under 0000:18:00.1: mlx_0_1 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@440 -- # is_hw=yes 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@442 -- # [[ yes == yes ]] 
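The NIC discovery traced above reduces to one sysfs glob per matched Mellanox function (0x15b3:0x1013); done by hand for the two ports in this run:

  # Kernel net devices backing each PCI function, exactly what the helper's
  # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) glob expands to:
  ls /sys/bus/pci/devices/0000:18:00.0/net/    # -> mlx_0_0
  ls /sys/bus/pci/devices/0000:18:00.1/net/    # -> mlx_0_1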
00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@443 -- # [[ rdma == tcp ]] 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@445 -- # [[ rdma == rdma ]] 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@446 -- # rdma_device_init 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@527 -- # load_ib_rdma_modules 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # uname 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # '[' Linux '!=' Linux ']' 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe ib_cm 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe ib_core 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe ib_umad 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@69 -- # modprobe ib_uverbs 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@70 -- # modprobe iw_cm 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@71 -- # modprobe rdma_cm 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # modprobe rdma_ucm 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@528 -- # allocate_nic_ips 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@76 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # get_rdma_if_list 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:30.519 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_0 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:30.520 17:47:08 
spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.8 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.8 ]] 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_0 00:22:30.520 2: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:30.520 link/ether 24:8a:07:b1:b3:94 brd ff:ff:ff:ff:ff:ff 00:22:30.520 altname enp24s0f0np0 00:22:30.520 altname ens785f0np0 00:22:30.520 inet 192.168.100.8/24 scope global mlx_0_0 00:22:30.520 valid_lft forever preferred_lft forever 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@77 -- # for nic_name in $(get_rdma_if_list) 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # get_ip_address mlx_0_1 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@78 -- # ip=192.168.100.9 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@79 -- # [[ -z 192.168.100.9 ]] 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@85 -- # ip addr show mlx_0_1 00:22:30.520 3: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:30.520 link/ether 24:8a:07:b1:b3:95 brd ff:ff:ff:ff:ff:ff 00:22:30.520 altname enp24s0f1np1 00:22:30.520 altname ens785f1np1 00:22:30.520 inet 192.168.100.9/24 scope global mlx_0_1 00:22:30.520 valid_lft forever preferred_lft forever 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # return 0 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@481 -- # [[ rdma == \r\d\m\a ]] 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # get_available_rdma_ips 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # get_rdma_if_list 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # mapfile -t rxe_net_devs 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@98 -- # rxe_cfg rxe-net 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@100 -- # (( 2 == 0 )) 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_0 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # for net_dev in "${net_devs[@]}" 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@106 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@107 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@108 -- # echo mlx_0_1 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@109 -- # continue 2 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_0 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_0 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_0 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@90 -- # for nic_name in $(get_rdma_if_list) 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@91 -- # get_ip_address mlx_0_1 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@116 -- # interface=mlx_0_1 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # ip -o -4 addr show mlx_0_1 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # awk '{print $4}' 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # cut -d/ -f1 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@482 -- # RDMA_IP_LIST='192.168.100.8 00:22:30.520 192.168.100.9' 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # echo '192.168.100.8 00:22:30.520 192.168.100.9' 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # head -n 1 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@483 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # echo '192.168.100.8 00:22:30.520 192.168.100.9' 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # tail -n +2 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # head -n 1 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@484 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@485 -- # '[' -z 192.168.100.8 ']' 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@494 -- # '[' rdma == tcp ']' 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@494 -- # '[' rdma == rdma ']' 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- nvmf/common.sh@500 -- # modprobe nvme-rdma 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:22:30.520 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:22:30.520 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:22:30.520 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:22:30.520 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:22:30.520 '\''/bdevs/malloc create 32 512 
00:22:30.520 17:47:08 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True
00:22:30.520 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True
00:22:30.520 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True
00:22:30.520 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True
00:22:30.520 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True
00:22:30.520 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True
00:22:30.520 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True
00:22:30.520 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:22:30.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True
00:22:30.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True
00:22:30.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True
00:22:30.520 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:22:30.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True
00:22:30.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True
00:22:30.520 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:22:30.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True
00:22:30.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True
00:22:30.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True
00:22:30.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:22:30.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:22:30.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\''
00:22:30.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True
00:22:30.520 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True
00:22:30.521 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True
00:22:30.521 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:22:30.521 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True
00:22:30.521 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True
00:22:30.521 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\''
00:22:30.521 '
00:22:33.047 [2024-10-17 17:47:11.384579] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa2bfe0/0x91d6c0) succeed.
00:22:33.047 [2024-10-17 17:47:11.394274] rdma.c:2585:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa2d6c0/0x99d700) succeed.
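
Each spdkcli path passed to spdkcli_job.py above corresponds to an SPDK JSON-RPC. For orientation, a rough shell equivalent of the first few commands using scripts/rpc.py (a sketch only; the short flags shown are assumptions that can vary between SPDK versions, and the test itself drives spdkcli, not rpc.py):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 32 512 -b Malloc1            # /bdevs/malloc create 32 512 Malloc1
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024
    $rpc nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -s N37SXV509SRW -m 4 -a
    $rpc nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4260 -f ipv4
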
00:22:34.421 [2024-10-17 17:47:12.672181] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 ***
00:22:36.947 [2024-10-17 17:47:14.987532] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 ***
00:22:38.847 [2024-10-17 17:47:16.981964] rdma.c:3040:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 ***
00:22:40.218 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True]
00:22:40.218 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True]
00:22:40.218 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True]
00:22:40.218 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True]
00:22:40.218 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True]
00:22:40.218 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True]
00:22:40.218 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True]
00:22:40.218 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True]
00:22:40.218 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True]
00:22:40.218 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True]
00:22:40.218 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True]
00:22:40.218 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:22:40.218 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True]
00:22:40.218 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True]
00:22:40.218 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:22:40.218 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True]
00:22:40.218 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True]
00:22:40.218 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True]
00:22:40.218 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True]
00:22:40.218 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:22:40.218 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False]
00:22:40.218 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True]
00:22:40.218 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True]
00:22:40.218 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True]
00:22:40.218 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:22:40.218 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True]
00:22:40.218 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True]
00:22:40.218 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False]
00:22:40.569 17:47:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config
00:22:40.569 17:47:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable
00:22:40.569 17:47:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:22:40.569 17:47:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match
00:22:40.569 17:47:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable
00:22:40.569 17:47:18 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:22:40.569 17:47:18 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match
00:22:40.569 17:47:18 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf
00:22:40.858 17:47:19 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match
00:22:40.858 17:47:19 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test
00:22:40.858 17:47:19 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match
00:22:40.858 17:47:19 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable
00:22:40.858 17:47:19 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
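
check_match, traced above from spdkcli/common.sh@44-46, dumps the live spdkcli tree and compares it against a golden file with SPDK's match tool. The shape of the helper, inferred from the three traced commands (the redirect into the .test file is implied by the rm that follows; $rootdir and $testdir stand in for the /var/jenkins/... paths above):

    check_match() {
        # Snapshot the current /nvmf subtree, diff against the golden .match file,
        # then remove the snapshot so a rerun starts clean.
        "$rootdir/scripts/spdkcli.py" ll /nvmf > "$testdir/match_files/spdkcli_nvmf.test"
        "$rootdir/test/app/match/match" "$testdir/match_files/spdkcli_nvmf.test.match"
        rm -f "$testdir/match_files/spdkcli_nvmf.test"
    }
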
00:22:40.858 17:47:19 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config
00:22:40.858 17:47:19 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable
00:22:40.858 17:47:19 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:22:40.858 17:47:19 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\''
00:22:40.858 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\''
00:22:40.858 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:22:40.858 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\''
00:22:40.858 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\''
00:22:40.858 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\''
00:22:40.858 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\''
00:22:40.858 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:22:40.858 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\''
00:22:40.858 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\''
00:22:40.858 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\''
00:22:40.858 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\''
00:22:40.858 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\''
00:22:40.858 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\''
00:22:40.858 '
00:22:46.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False]
00:22:46.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False]
00:22:46.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False]
00:22:46.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False]
00:22:46.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False]
00:22:46.168 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False]
00:22:46.168 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False]
00:22:46.168 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False]
00:22:46.168 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False]
00:22:46.168 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False]
00:22:46.168 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False]
00:22:46.168 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False]
00:22:46.168 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False]
00:22:46.168 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False]
00:22:46.168 17:47:24 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config
00:22:46.168 17:47:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable
00:22:46.168 17:47:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:22:46.168 17:47:24 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 725449
00:22:46.168 17:47:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@950 -- # '[' -z 725449 ']'
00:22:46.168 17:47:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # kill -0 725449
00:22:46.168 17:47:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@955 -- # uname
00:22:46.168 17:47:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:22:46.168 17:47:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 725449
00:22:46.168 17:47:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:22:46.168 17:47:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:22:46.168 17:47:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@968 -- # echo 'killing process with pid 725449'
killing process with pid 725449
00:22:46.168 17:47:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@969 -- # kill 725449
00:22:46.168 17:47:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@974 -- # wait 725449
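
killprocess, traced above at common/autotest_common.sh@950-974, is the standard teardown for the spdk_tgt reactor started earlier in the run. A sketch of the control flow as it appears in the trace (guard details inferred, not copied from SPDK):

    killprocess() {
        local pid=$1
        [[ -z $pid ]] && return 1                             # @950: require a pid
        kill -0 "$pid" || return 0                            # @954: already gone?
        local process_name=
        if [[ $(uname) == Linux ]]; then                      # @955
            process_name=$(ps --no-headers -o comm= "$pid")   # @956: reactor_0 here
        fi
        [[ $process_name == sudo ]] && return 1               # @960: never signal a sudo wrapper
        echo "killing process with pid $pid"                  # @968
        kill "$pid"                                           # @969
        wait "$pid" || true                                   # @974: reap, tolerate exit status
    }
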
00:22:46.426 17:47:24 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini
00:22:46.426 17:47:24 spdkcli_nvmf_rdma -- nvmf/common.sh@514 -- # nvmfcleanup
00:22:46.426 17:47:24 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # sync
00:22:46.426 17:47:24 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == tcp ']'
00:22:46.427 17:47:24 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # '[' rdma == rdma ']'
00:22:46.427 17:47:24 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set +e
00:22:46.427 17:47:24 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:46.427 17:47:24 spdkcli_nvmf_rdma -- nvmf/common.sh@126 -- # modprobe -v -r nvme-rdma
00:22:46.427 rmmod nvme_rdma
00:22:46.427 rmmod nvme_fabrics
00:22:46.427 17:47:24 spdkcli_nvmf_rdma -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:46.427 17:47:24 spdkcli_nvmf_rdma -- nvmf/common.sh@128 -- # set -e
00:22:46.427 17:47:24 spdkcli_nvmf_rdma -- nvmf/common.sh@129 -- # return 0
00:22:46.427 17:47:24 spdkcli_nvmf_rdma -- nvmf/common.sh@515 -- # '[' -n '' ']'
00:22:46.427 17:47:24 spdkcli_nvmf_rdma -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:22:46.427 17:47:24 spdkcli_nvmf_rdma -- nvmf/common.sh@521 -- # [[ rdma == \t\c\p ]]
00:22:46.427 
00:22:46.427 real 0m23.030s
00:22:46.427 user 0m49.236s
00:22:46.427 sys 0m5.956s
00:22:46.427 17:47:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@1126 -- # xtrace_disable
00:22:46.427 17:47:24 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:22:46.427 ************************************
00:22:46.427 END TEST spdkcli_nvmf_rdma
00:22:46.427 ************************************
00:22:46.427 17:47:24 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']'
00:22:46.427 17:47:24 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:22:46.427 17:47:24 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:22:46.427 17:47:24 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:22:46.427 17:47:24 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']'
00:22:46.427 17:47:24 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']'
00:22:46.427 17:47:24 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:22:46.427 17:47:24 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:22:46.427 17:47:24 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:22:46.427 17:47:24 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:22:46.427 17:47:24 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:22:46.427 17:47:24 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]]
00:22:46.427 17:47:24 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:22:46.427 17:47:24 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:22:46.427 17:47:24 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]]
00:22:46.427 17:47:24 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT
00:22:46.427 17:47:24 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup
00:22:46.427 17:47:24 -- common/autotest_common.sh@724 -- # xtrace_disable
00:22:46.427 17:47:24 -- common/autotest_common.sh@10 -- # set +x
00:22:46.427 17:47:24 -- spdk/autotest.sh@384 -- # autotest_cleanup
00:22:46.427 17:47:24 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:22:46.427 17:47:24 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:22:46.427 17:47:24 -- common/autotest_common.sh@10 -- # set +x
00:22:50.608 INFO: APP EXITING
00:22:50.608 INFO: killing all VMs
00:22:50.608 INFO: killing vhost app
00:22:50.608 WARN: no vhost pid file found
00:22:50.608 INFO: EXIT DONE
00:22:53.889 Waiting for block devices as requested
00:22:53.889 0000:5e:00.0 (144d a80a): vfio-pci -> nvme
00:22:53.889 0000:af:00.0 (8086 2701): vfio-pci -> nvme
00:22:53.889 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:22:53.889 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:22:54.146 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:22:54.146 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:22:54.146 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:22:54.146 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:22:54.403 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:22:54.403 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:22:54.403 0000:b0:00.0 (8086 2701): vfio-pci -> nvme
00:22:54.659 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:22:54.659 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:22:54.659 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:22:54.916 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:22:54.916 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:22:54.916 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:22:55.173 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:22:55.173 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:22:59.358 Cleaning
00:22:59.358 Removing: /var/run/dpdk/spdk0/config
00:22:59.358 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:22:59.358 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:22:59.358 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:22:59.358 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:22:59.358 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:22:59.358 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:22:59.358 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:22:59.358 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:22:59.358 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:22:59.358 Removing: /var/run/dpdk/spdk0/hugepage_info
00:22:59.358 Removing: /var/run/dpdk/spdk1/config
00:22:59.358 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:22:59.358 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:22:59.358 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:22:59.358 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:22:59.358 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:22:59.358 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:22:59.358 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:22:59.358 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:22:59.358 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:22:59.358 Removing: /var/run/dpdk/spdk1/hugepage_info
00:22:59.358 Removing: /var/run/dpdk/spdk1/mp_socket
00:22:59.358 Removing: /var/run/dpdk/spdk2/config
00:22:59.358 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:22:59.358 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:22:59.358 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:22:59.358 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:22:59.358 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:22:59.358 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:22:59.358 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:22:59.358 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:22:59.358 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:22:59.358 Removing: /var/run/dpdk/spdk2/hugepage_info
00:22:59.358 Removing: /var/run/dpdk/spdk3/config
00:22:59.358 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:22:59.358 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:22:59.358 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:22:59.358 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:22:59.358 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:22:59.358 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:22:59.358 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:22:59.358 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:22:59.358 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:22:59.358 Removing: /var/run/dpdk/spdk3/hugepage_info
00:22:59.358 Removing: /var/run/dpdk/spdk4/config
00:22:59.358 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:22:59.358 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:22:59.358 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:22:59.358 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:22:59.358 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:22:59.358 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:22:59.358 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:22:59.358 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:22:59.358 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:22:59.358 Removing: /var/run/dpdk/spdk4/hugepage_info
00:22:59.358 Removing: /dev/shm/bdevperf_trace.pid510857
00:22:59.358 Removing: /dev/shm/bdev_svc_trace.1
00:22:59.358 Removing: /dev/shm/nvmf_trace.0
00:22:59.358 Removing: /dev/shm/spdk_tgt_trace.pid475349
00:22:59.358 Removing: /var/run/dpdk/spdk0
00:22:59.358 Removing: /var/run/dpdk/spdk1
00:22:59.358 Removing: /var/run/dpdk/spdk2
00:22:59.358 Removing: /var/run/dpdk/spdk3
00:22:59.358 Removing: /var/run/dpdk/spdk4
00:22:59.358 Removing: /var/run/dpdk/spdk_pid474700
00:22:59.358 Removing: /var/run/dpdk/spdk_pid475349
00:22:59.358 Removing: /var/run/dpdk/spdk_pid475876
00:22:59.358 Removing: /var/run/dpdk/spdk_pid476646
00:22:59.358 Removing: /var/run/dpdk/spdk_pid476670
00:22:59.358 Removing: /var/run/dpdk/spdk_pid477459
00:22:59.358 Removing: /var/run/dpdk/spdk_pid477626
00:22:59.358 Removing: /var/run/dpdk/spdk_pid477932
00:22:59.358 Removing: /var/run/dpdk/spdk_pid481694
00:22:59.358 Removing: /var/run/dpdk/spdk_pid482359
00:22:59.358 Removing: /var/run/dpdk/spdk_pid482602
00:22:59.358 Removing: /var/run/dpdk/spdk_pid482841
00:22:59.358 Removing: /var/run/dpdk/spdk_pid483099
00:22:59.358 Removing: /var/run/dpdk/spdk_pid483348
00:22:59.358 Removing: /var/run/dpdk/spdk_pid483557
00:22:59.358 Removing: /var/run/dpdk/spdk_pid483755
00:22:59.358 Removing: /var/run/dpdk/spdk_pid483994
00:22:59.358 Removing: /var/run/dpdk/spdk_pid484588
00:22:59.358 Removing: /var/run/dpdk/spdk_pid487021
00:22:59.358 Removing: /var/run/dpdk/spdk_pid487230
00:22:59.358 Removing: /var/run/dpdk/spdk_pid487436
00:22:59.358 Removing: /var/run/dpdk/spdk_pid487454
00:22:59.358 Removing: /var/run/dpdk/spdk_pid487850
00:22:59.358 Removing: /var/run/dpdk/spdk_pid488021
00:22:59.358 Removing: /var/run/dpdk/spdk_pid488353
00:22:59.358 Removing: /var/run/dpdk/spdk_pid488428
00:22:59.358 Removing: /var/run/dpdk/spdk_pid488639
00:22:59.358 Removing: /var/run/dpdk/spdk_pid488661
00:22:59.358 Removing: /var/run/dpdk/spdk_pid488865
00:22:59.358 Removing: /var/run/dpdk/spdk_pid489001
00:22:59.358 Removing: /var/run/dpdk/spdk_pid489391
00:22:59.358 Removing: /var/run/dpdk/spdk_pid489548
00:22:59.358 Removing: /var/run/dpdk/spdk_pid489862
00:22:59.358 Removing: /var/run/dpdk/spdk_pid493296
00:22:59.358 Removing: /var/run/dpdk/spdk_pid496826
00:22:59.358 Removing: /var/run/dpdk/spdk_pid506031
00:22:59.358 Removing: /var/run/dpdk/spdk_pid506595
00:22:59.358 Removing: /var/run/dpdk/spdk_pid510857
00:22:59.358 Removing: /var/run/dpdk/spdk_pid511048
00:22:59.358 Removing: /var/run/dpdk/spdk_pid514580
00:22:59.358 Removing: /var/run/dpdk/spdk_pid519497
00:22:59.358 Removing: /var/run/dpdk/spdk_pid521667
00:22:59.358 Removing: /var/run/dpdk/spdk_pid530093
00:22:59.358 Removing: /var/run/dpdk/spdk_pid552583
00:22:59.358 Removing: /var/run/dpdk/spdk_pid555970
00:22:59.358 Removing: /var/run/dpdk/spdk_pid597794
00:22:59.358 Removing: /var/run/dpdk/spdk_pid602229
00:22:59.358 Removing: /var/run/dpdk/spdk_pid607331
00:22:59.358 Removing: /var/run/dpdk/spdk_pid614773
00:22:59.358 Removing: /var/run/dpdk/spdk_pid650787
00:22:59.358 Removing: /var/run/dpdk/spdk_pid651554
00:22:59.358 Removing: /var/run/dpdk/spdk_pid652278
00:22:59.358 Removing: /var/run/dpdk/spdk_pid653165
00:22:59.358 Removing: /var/run/dpdk/spdk_pid657470
00:22:59.359 Removing: /var/run/dpdk/spdk_pid664132
00:22:59.359 Removing: /var/run/dpdk/spdk_pid664842
00:22:59.359 Removing: /var/run/dpdk/spdk_pid665475
00:22:59.359 Removing: /var/run/dpdk/spdk_pid666213
00:22:59.359 Removing: /var/run/dpdk/spdk_pid666517
00:22:59.359 Removing: /var/run/dpdk/spdk_pid670370
00:22:59.359 Removing: /var/run/dpdk/spdk_pid670372
00:22:59.359 Removing: /var/run/dpdk/spdk_pid674086
00:22:59.359 Removing: /var/run/dpdk/spdk_pid674454
00:22:59.359 Removing: /var/run/dpdk/spdk_pid674944
00:22:59.359 Removing: /var/run/dpdk/spdk_pid675538
00:22:59.359 Removing: /var/run/dpdk/spdk_pid675543
00:22:59.359 Removing: /var/run/dpdk/spdk_pid679611
00:22:59.359 Removing: /var/run/dpdk/spdk_pid680065
00:22:59.359 Removing: /var/run/dpdk/spdk_pid683658
00:22:59.359 Removing: /var/run/dpdk/spdk_pid685837
00:22:59.359 Removing: /var/run/dpdk/spdk_pid690524
00:22:59.359 Removing: /var/run/dpdk/spdk_pid699614
00:22:59.359 Removing: /var/run/dpdk/spdk_pid699627
00:22:59.359 Removing: /var/run/dpdk/spdk_pid716808
00:22:59.359 Removing: /var/run/dpdk/spdk_pid716997
00:22:59.359 Removing: /var/run/dpdk/spdk_pid721958
00:22:59.359 Removing: /var/run/dpdk/spdk_pid722364
00:22:59.359 Removing: /var/run/dpdk/spdk_pid723866
00:22:59.359 Removing: /var/run/dpdk/spdk_pid725449
00:22:59.359 Clean
00:22:59.359 17:47:37 -- common/autotest_common.sh@1451 -- # return 0
00:22:59.359 17:47:37 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:22:59.359 17:47:37 -- common/autotest_common.sh@730 -- # xtrace_disable
00:22:59.359 17:47:37 -- common/autotest_common.sh@10 -- # set +x
00:22:59.359 17:47:37 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:22:59.359 17:47:37 -- common/autotest_common.sh@730 -- # xtrace_disable
00:22:59.359 17:47:37 -- common/autotest_common.sh@10 -- # set +x
00:22:59.616 17:47:37 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt
00:22:59.616 17:47:37 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]]
00:22:59.616 17:47:37 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log
00:22:59.616 17:47:37 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:22:59.616 17:47:37 -- spdk/autotest.sh@394 -- # hostname
00:22:59.616 17:47:37 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-29 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info
00:22:59.616 geninfo: WARNING: invalid characters removed from testname!
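
The capture above and the lcov calls that follow implement the usual coverage aggregation pipeline: capture counters from this test run, merge them with the baseline taken before the tests, then strip vendored and system code from the total. Condensed for readability (the --rc flags are elided and the long output path is shortened to $out; these shorthands are editorial, the full commands appear in the trace):

    out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
    lcov -q -c --no-external -d ./spdk -t "$(hostname)" -o $out/cov_test.info   # capture this run
    lcov -q -a $out/cov_base.info -a $out/cov_test.info -o $out/cov_total.info  # merge with baseline
    lcov -q -r $out/cov_total.info '*/dpdk/*' -o $out/cov_total.info            # drop vendored dpdk
    lcov -q -r $out/cov_total.info '/usr/*'   -o $out/cov_total.info            # drop system code
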
00:23:17.684 17:47:55 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:23:20.211 17:47:58 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:23:22.112 17:48:00 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:23:24.014 17:48:02 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:23:25.915 17:48:03 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:23:27.291 17:48:05 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:23:29.193 17:48:07 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:23:29.193 17:48:07 -- common/autotest_common.sh@1690 -- $ [[ y == y ]]
00:23:29.194 17:48:07 -- common/autotest_common.sh@1691 -- $ lcov --version
00:23:29.194 17:48:07 -- common/autotest_common.sh@1691 -- $ awk '{print $NF}'
00:23:29.194 17:48:07 -- common/autotest_common.sh@1691 -- $ lt 1.15 2
00:23:29.194 17:48:07 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:23:29.194 17:48:07 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:23:29.194 17:48:07 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:23:29.194 17:48:07 -- scripts/common.sh@336 -- $ IFS=.-:
00:23:29.194 17:48:07 -- scripts/common.sh@336 -- $ read -ra ver1
00:23:29.194 17:48:07 -- scripts/common.sh@337 -- $ IFS=.-:
00:23:29.194 17:48:07 -- scripts/common.sh@337 -- $ read -ra ver2
00:23:29.194 17:48:07 -- scripts/common.sh@338 -- $ local 'op=<'
00:23:29.194 17:48:07 -- scripts/common.sh@340 -- $ ver1_l=2
00:23:29.194 17:48:07 -- scripts/common.sh@341 -- $ ver2_l=1
00:23:29.194 17:48:07 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:23:29.194 17:48:07 -- scripts/common.sh@344 -- $ case "$op" in
00:23:29.194 17:48:07 -- scripts/common.sh@345 -- $ : 1
00:23:29.194 17:48:07 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:23:29.194 17:48:07 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:23:29.194 17:48:07 -- scripts/common.sh@365 -- $ decimal 1
00:23:29.194 17:48:07 -- scripts/common.sh@353 -- $ local d=1
00:23:29.194 17:48:07 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:23:29.194 17:48:07 -- scripts/common.sh@355 -- $ echo 1
00:23:29.194 17:48:07 -- scripts/common.sh@365 -- $ ver1[v]=1
00:23:29.194 17:48:07 -- scripts/common.sh@366 -- $ decimal 2
00:23:29.194 17:48:07 -- scripts/common.sh@353 -- $ local d=2
00:23:29.194 17:48:07 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:23:29.194 17:48:07 -- scripts/common.sh@355 -- $ echo 2
00:23:29.194 17:48:07 -- scripts/common.sh@366 -- $ ver2[v]=2
00:23:29.194 17:48:07 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:23:29.194 17:48:07 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:23:29.194 17:48:07 -- scripts/common.sh@368 -- $ return 0
00:23:29.194 17:48:07 -- common/autotest_common.sh@1692 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:23:29.194 17:48:07 -- common/autotest_common.sh@1704 -- $ export 'LCOV_OPTS=
00:23:29.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:29.194 --rc genhtml_branch_coverage=1
00:23:29.194 --rc genhtml_function_coverage=1
00:23:29.194 --rc genhtml_legend=1
00:23:29.194 --rc geninfo_all_blocks=1
00:23:29.194 --rc geninfo_unexecuted_blocks=1
00:23:29.194 
00:23:29.194 '
00:23:29.194 17:48:07 -- common/autotest_common.sh@1704 -- $ LCOV_OPTS='
00:23:29.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:29.194 --rc genhtml_branch_coverage=1
00:23:29.194 --rc genhtml_function_coverage=1
00:23:29.194 --rc genhtml_legend=1
00:23:29.194 --rc geninfo_all_blocks=1
00:23:29.194 --rc geninfo_unexecuted_blocks=1
00:23:29.194 
00:23:29.194 '
00:23:29.194 17:48:07 -- common/autotest_common.sh@1705 -- $ export 'LCOV=lcov
00:23:29.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:29.194 --rc genhtml_branch_coverage=1
00:23:29.194 --rc genhtml_function_coverage=1
00:23:29.194 --rc genhtml_legend=1
00:23:29.194 --rc geninfo_all_blocks=1
00:23:29.194 --rc geninfo_unexecuted_blocks=1
00:23:29.194 
00:23:29.194 '
00:23:29.194 17:48:07 -- common/autotest_common.sh@1705 -- $ LCOV='lcov
00:23:29.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:29.194 --rc genhtml_branch_coverage=1
00:23:29.194 --rc genhtml_function_coverage=1
00:23:29.194 --rc genhtml_legend=1
00:23:29.194 --rc geninfo_all_blocks=1
00:23:29.194 --rc geninfo_unexecuted_blocks=1
00:23:29.194 
00:23:29.194 '
00:23:29.194 17:48:07 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:23:29.194 17:48:07 -- scripts/common.sh@15 -- $ shopt -s extglob
00:23:29.194 17:48:07 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:23:29.194 17:48:07 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:23:29.194 17:48:07 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
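
The scripts/common.sh trace above (lt 1.15 2 via cmp_versions) is a field-wise version comparison used to decide whether the installed lcov is new enough for the branch/function --rc options. A simplified sketch of the traced logic (the real helper also sanitizes each field through decimal(), omitted here):

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.-: op=$2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v lt=0 gt=0
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((ver1[v] > ver2[v])) && gt=1 && break    # unset fields compare as 0
            ((ver1[v] < ver2[v])) && lt=1 && break    # 1 < 2 here, so "1.15 < 2" holds
        done
        case "$op" in '<') ((lt == 1)) ;; '>') ((gt == 1)) ;; esac
    }
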
00:23:29.194 17:48:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:29.194 17:48:07 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:29.194 17:48:07 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:29.194 17:48:07 -- paths/export.sh@5 -- $ export PATH
00:23:29.194 17:48:07 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:23:29.194 17:48:07 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
00:23:29.194 17:48:07 -- common/autobuild_common.sh@486 -- $ date +%s
00:23:29.194 17:48:07 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1729180087.XXXXXX
00:23:29.194 17:48:07 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1729180087.idctXM
00:23:29.194 17:48:07 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:23:29.194 17:48:07 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:23:29.194 17:48:07 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/'
00:23:29.194 17:48:07 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
00:23:29.194 17:48:07 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:23:29.194 17:48:07 -- common/autobuild_common.sh@502 -- $ get_config_params
00:23:29.194 17:48:07 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:23:29.194 17:48:07 -- common/autotest_common.sh@10 -- $ set +x
00:23:29.194 17:48:07 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
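
start_monitor_resources, entered in the trace just below, launches one background collector per entry in MONITOR_RESOURCES, each logging under the power/ output directory. An approximate shape of the loop (array contents and loop body are inferred from the four invocations that follow; the real helper lives in scripts/perf/pm/common and may differ):

    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm)
    for monitor in "${MONITOR_RESOURCES[@]}"; do
        # collect-bmc-pm needs root, hence the sudo -E seen in the trace below
        "$rootdir/scripts/perf/pm/$monitor" -d "$output_dir/power" -l \
            -p "monitor.autopackage.sh.$(date +%s)" &
    done
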
"${MONITOR_RESOURCES[@]}" 00:23:29.194 17:48:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:29.194 17:48:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:29.194 17:48:07 -- pm/common@21 -- $ date +%s 00:23:29.194 17:48:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:29.194 17:48:07 -- pm/common@21 -- $ date +%s 00:23:29.194 17:48:07 -- pm/common@25 -- $ sleep 1 00:23:29.194 17:48:07 -- pm/common@21 -- $ date +%s 00:23:29.194 17:48:07 -- pm/common@21 -- $ date +%s 00:23:29.194 17:48:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1729180087 00:23:29.194 17:48:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1729180087 00:23:29.194 17:48:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1729180087 00:23:29.194 17:48:07 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1729180087 00:23:29.453 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1729180087_collect-vmstat.pm.log 00:23:29.453 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1729180087_collect-cpu-load.pm.log 00:23:29.453 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1729180087_collect-cpu-temp.pm.log 00:23:29.453 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1729180087_collect-bmc-pm.bmc.pm.log 00:23:30.390 17:48:08 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:23:30.390 17:48:08 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:23:30.390 17:48:08 -- spdk/autopackage.sh@14 -- $ timing_finish 00:23:30.390 17:48:08 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:23:30.390 17:48:08 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:23:30.390 17:48:08 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:23:30.390 17:48:08 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:23:30.390 17:48:08 -- pm/common@29 -- $ signal_monitor_resources TERM 00:23:30.390 17:48:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:23:30.390 17:48:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:30.390 17:48:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:23:30.390 17:48:08 -- pm/common@44 -- $ pid=738781 00:23:30.390 17:48:08 -- pm/common@50 -- $ kill -TERM 738781 00:23:30.390 17:48:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:23:30.390 17:48:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:23:30.390 17:48:08 -- pm/common@44 -- $ pid=738782 00:23:30.390 
00:23:30.390 17:48:08 -- pm/common@50 -- $ kill -TERM 738782
00:23:30.390 17:48:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:23:30.390 17:48:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:23:30.390 17:48:08 -- pm/common@44 -- $ pid=738785
00:23:30.390 17:48:08 -- pm/common@50 -- $ kill -TERM 738785
00:23:30.390 17:48:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:23:30.390 17:48:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:23:30.390 17:48:08 -- pm/common@44 -- $ pid=738810
00:23:30.390 17:48:08 -- pm/common@50 -- $ sudo -E kill -TERM 738810
00:23:30.390 + [[ -n 401108 ]]
00:23:30.390 + sudo kill 401108
00:23:30.400 [Pipeline] }
00:23:30.415 [Pipeline] // stage
00:23:30.420 [Pipeline] }
00:23:30.433 [Pipeline] // timeout
00:23:30.438 [Pipeline] }
00:23:30.451 [Pipeline] // catchError
00:23:30.456 [Pipeline] }
00:23:30.472 [Pipeline] // wrap
00:23:30.478 [Pipeline] }
00:23:30.490 [Pipeline] // catchError
00:23:30.499 [Pipeline] stage
00:23:30.501 [Pipeline] { (Epilogue)
00:23:30.514 [Pipeline] catchError
00:23:30.518 [Pipeline] {
00:23:30.530 [Pipeline] echo
00:23:30.532 Cleanup processes
00:23:30.537 [Pipeline] sh
00:23:30.821 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:23:30.821 738928 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/sdr.cache
00:23:30.821 739192 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:23:30.834 [Pipeline] sh
00:23:31.117 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:23:31.117 ++ grep -v 'sudo pgrep'
00:23:31.117 ++ awk '{print $1}'
00:23:31.117 + sudo kill -9 738928
00:23:31.128 [Pipeline] sh
00:23:31.409 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:23:39.538 [Pipeline] sh
00:23:39.822 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:23:39.822 Artifacts sizes are good
00:23:39.837 [Pipeline] archiveArtifacts
00:23:39.845 Archiving artifacts
00:23:39.964 [Pipeline] sh
00:23:40.249 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest
00:23:40.263 [Pipeline] cleanWs
00:23:40.273 [WS-CLEANUP] Deleting project workspace...
00:23:40.273 [WS-CLEANUP] Deferred wipeout is used...
00:23:40.279 [WS-CLEANUP] done
00:23:40.281 [Pipeline] }
00:23:40.299 [Pipeline] // catchError
00:23:40.310 [Pipeline] sh
00:23:40.615 + logger -p user.info -t JENKINS-CI
00:23:40.643 [Pipeline] }
00:23:40.656 [Pipeline] // stage
00:23:40.661 [Pipeline] }
00:23:40.674 [Pipeline] // node
00:23:40.680 [Pipeline] End of Pipeline
00:23:40.720 Finished: SUCCESS